2025-09-17 15:18:18.628447 | Job console starting
2025-09-17 15:18:18.645091 | Updating git repos
2025-09-17 15:18:18.730781 | Cloning repos into workspace
2025-09-17 15:18:18.905917 | Restoring repo states
2025-09-17 15:18:18.924660 | Merging changes
2025-09-17 15:18:18.924680 | Checking out repos
2025-09-17 15:18:19.291021 | Preparing playbooks
2025-09-17 15:18:19.904321 | Running Ansible setup
2025-09-17 15:18:23.987578 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-17 15:18:24.725237 |
2025-09-17 15:18:24.725393 | PLAY [Base pre]
2025-09-17 15:18:24.742093 |
2025-09-17 15:18:24.742230 | TASK [Setup log path fact]
2025-09-17 15:18:24.773612 | orchestrator | ok
2025-09-17 15:18:24.790827 |
2025-09-17 15:18:24.791020 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-17 15:18:24.830567 | orchestrator | ok
2025-09-17 15:18:24.843536 |
2025-09-17 15:18:24.843646 | TASK [emit-job-header : Print job information]
2025-09-17 15:18:24.893348 | # Job Information
2025-09-17 15:18:24.893529 | Ansible Version: 2.16.14
2025-09-17 15:18:24.893565 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-09-17 15:18:24.893600 | Pipeline: post
2025-09-17 15:18:24.893624 | Executor: 521e9411259a
2025-09-17 15:18:24.893645 | Triggered by: https://github.com/osism/testbed/commit/e90d59ea1e285c9e2a064db2dfb8568e5853aee0
2025-09-17 15:18:24.893667 | Event ID: 85de28c8-93d9-11f0-8fc0-160a3ba7cf29
2025-09-17 15:18:24.900459 |
2025-09-17 15:18:24.900579 | LOOP [emit-job-header : Print node information]
2025-09-17 15:18:25.016760 | orchestrator | ok:
2025-09-17 15:18:25.017082 | orchestrator | # Node Information
2025-09-17 15:18:25.017133 | orchestrator | Inventory Hostname: orchestrator
2025-09-17 15:18:25.017160 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-17 15:18:25.017182 | orchestrator | Username: zuul-testbed06
2025-09-17 15:18:25.017203 | orchestrator | Distro: Debian 12.12
2025-09-17 15:18:25.017229 | orchestrator | Provider: static-testbed
2025-09-17 15:18:25.017250 | orchestrator | Region:
2025-09-17 15:18:25.017271 | orchestrator | Label: testbed-orchestrator
2025-09-17 15:18:25.017291 | orchestrator | Product Name: OpenStack Nova
2025-09-17 15:18:25.017310 | orchestrator | Interface IP: 81.163.193.140
2025-09-17 15:18:25.038302 |
2025-09-17 15:18:25.038425 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-17 15:18:25.497938 | orchestrator -> localhost | changed
2025-09-17 15:18:25.506231 |
2025-09-17 15:18:25.506358 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-17 15:18:26.525343 | orchestrator -> localhost | changed
2025-09-17 15:18:26.539575 |
2025-09-17 15:18:26.539690 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-17 15:18:26.812076 | orchestrator -> localhost | ok
2025-09-17 15:18:26.823993 |
2025-09-17 15:18:26.824180 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-17 15:18:26.853938 | orchestrator | ok
2025-09-17 15:18:26.872235 | orchestrator | included: /var/lib/zuul/builds/8ca14ae0db454334a6aae0a72c6275fe/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-17 15:18:26.880803 |
2025-09-17 15:18:26.880901 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-17 15:18:27.858659 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-17 15:18:27.859024 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/8ca14ae0db454334a6aae0a72c6275fe/work/8ca14ae0db454334a6aae0a72c6275fe_id_rsa
2025-09-17 15:18:27.859094 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/8ca14ae0db454334a6aae0a72c6275fe/work/8ca14ae0db454334a6aae0a72c6275fe_id_rsa.pub
2025-09-17 15:18:27.859141 | orchestrator -> localhost | The key fingerprint is:
2025-09-17 15:18:27.859186 | orchestrator -> localhost | SHA256:TUXuHVphQ8Srn3u+ywlVubqTgtAl+LDsx80jQEajgt8 zuul-build-sshkey
2025-09-17 15:18:27.859224 | orchestrator -> localhost | The key's randomart image is:
2025-09-17 15:18:27.859280 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-17 15:18:27.859319 | orchestrator -> localhost | | .o+* |
2025-09-17 15:18:27.859355 | orchestrator -> localhost | | o o ..o.|
2025-09-17 15:18:27.859391 | orchestrator -> localhost | | . o o . . oo.|
2025-09-17 15:18:27.859424 | orchestrator -> localhost | | . . . = + o +..o|
2025-09-17 15:18:27.859459 | orchestrator -> localhost | | . o + S + o..o |
2025-09-17 15:18:27.859503 | orchestrator -> localhost | | . E = o . o |
2025-09-17 15:18:27.859539 | orchestrator -> localhost | | . + + +.. |
2025-09-17 15:18:27.859577 | orchestrator -> localhost | | . = = o* o|
2025-09-17 15:18:27.859613 | orchestrator -> localhost | | . . o.oO+|
2025-09-17 15:18:27.859649 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-17 15:18:27.859739 | orchestrator -> localhost | ok: Runtime: 0:00:00.488306
2025-09-17 15:18:27.871207 |
2025-09-17 15:18:27.871333 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-17 15:18:27.904166 | orchestrator | ok
2025-09-17 15:18:27.916394 | orchestrator | included: /var/lib/zuul/builds/8ca14ae0db454334a6aae0a72c6275fe/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-17 15:18:27.925740 |
2025-09-17 15:18:27.925855 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-17 15:18:27.948968 | orchestrator | skipping: Conditional result was False
2025-09-17 15:18:27.957768 |
2025-09-17 15:18:27.957873 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-17 15:18:28.648223 | orchestrator | changed
2025-09-17 15:18:28.658576 |
2025-09-17 15:18:28.658685 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-17 15:18:28.927659 | orchestrator | ok
2025-09-17 15:18:28.936147 |
2025-09-17 15:18:28.936276 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-17 15:18:29.351493 | orchestrator | ok
2025-09-17 15:18:29.360407 |
2025-09-17 15:18:29.360539 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-17 15:18:29.763113 | orchestrator | ok
2025-09-17 15:18:29.773075 |
2025-09-17 15:18:29.773234 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-17 15:18:29.801281 | orchestrator | skipping: Conditional result was False
2025-09-17 15:18:29.808134 |
2025-09-17 15:18:29.808241 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-17 15:18:30.239237 | orchestrator -> localhost | changed
2025-09-17 15:18:30.262398 |
2025-09-17 15:18:30.262526 | TASK [add-build-sshkey : Add back temp key]
2025-09-17 15:18:30.641587 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/8ca14ae0db454334a6aae0a72c6275fe/work/8ca14ae0db454334a6aae0a72c6275fe_id_rsa (zuul-build-sshkey)
2025-09-17 15:18:30.641941 | orchestrator -> localhost | ok: Runtime: 0:00:00.020578
2025-09-17 15:18:30.649404 |
2025-09-17 15:18:30.649519 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-17 15:18:31.071066 | orchestrator | ok
2025-09-17 15:18:31.079986 |
2025-09-17 15:18:31.080124 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-17 15:18:31.104391 | orchestrator | skipping: Conditional result was False
2025-09-17 15:18:31.160609 |
2025-09-17 15:18:31.160740 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-17 15:18:31.545051 | orchestrator | ok
2025-09-17 15:18:31.559913 |
2025-09-17 15:18:31.560058 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-17 15:18:31.600556 | orchestrator | ok
2025-09-17 15:18:31.608131 |
2025-09-17 15:18:31.608238 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-17 15:18:31.896476 | orchestrator -> localhost | ok
2025-09-17 15:18:31.911861 |
2025-09-17 15:18:31.912081 | TASK [validate-host : Collect information about the host]
2025-09-17 15:18:33.083165 | orchestrator | ok
2025-09-17 15:18:33.101721 |
2025-09-17 15:18:33.101843 | TASK [validate-host : Sanitize hostname]
2025-09-17 15:18:33.166955 | orchestrator | ok
2025-09-17 15:18:33.175426 |
2025-09-17 15:18:33.175714 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-17 15:18:33.742559 | orchestrator -> localhost | changed
2025-09-17 15:18:33.757856 |
2025-09-17 15:18:33.758043 | TASK [validate-host : Collect information about zuul worker]
2025-09-17 15:18:34.188517 | orchestrator | ok
2025-09-17 15:18:34.197089 |
2025-09-17 15:18:34.197229 | TASK [validate-host : Write out all zuul information for each host]
2025-09-17 15:18:34.749638 | orchestrator -> localhost | changed
2025-09-17 15:18:34.760920 |
2025-09-17 15:18:34.761081 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-17 15:18:35.051573 | orchestrator | ok
2025-09-17 15:18:35.060623 |
2025-09-17 15:18:35.060746 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-17 15:19:05.532241 | orchestrator | changed:
2025-09-17 15:19:05.532552 | orchestrator | .d..t...... src/
2025-09-17 15:19:05.532610 | orchestrator | .d..t...... src/github.com/
2025-09-17 15:19:05.532652 | orchestrator | .d..t...... src/github.com/osism/
2025-09-17 15:19:05.532688 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-17 15:19:05.532721 | orchestrator | RedHat.yml
2025-09-17 15:19:05.549080 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-17 15:19:05.549098 | orchestrator | RedHat.yml
2025-09-17 15:19:05.549151 | orchestrator | = 1.53.0"...
2025-09-17 15:19:19.765790 | orchestrator | 15:19:19.765 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-17 15:19:19.803836 | orchestrator | 15:19:19.803 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-17 15:19:19.980211 | orchestrator | 15:19:19.979 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-17 15:19:20.749341 | orchestrator | 15:19:20.748 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-17 15:19:20.861413 | orchestrator | 15:19:20.861 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-17 15:19:21.287123 | orchestrator | 15:19:21.286 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-17 15:19:21.365271 | orchestrator | 15:19:21.365 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-17 15:19:21.841580 | orchestrator | 15:19:21.840 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-17 15:19:21.841629 | orchestrator | 15:19:21.840 STDOUT terraform: Providers are signed by their developers.
2025-09-17 15:19:21.841635 | orchestrator | 15:19:21.840 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-17 15:19:21.841640 | orchestrator | 15:19:21.841 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-17 15:19:21.841644 | orchestrator | 15:19:21.841 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-17 15:19:21.841651 | orchestrator | 15:19:21.841 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-17 15:19:21.841657 | orchestrator | 15:19:21.841 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-17 15:19:21.841662 | orchestrator | 15:19:21.841 STDOUT terraform: you run "tofu init" in the future.
2025-09-17 15:19:21.841666 | orchestrator | 15:19:21.841 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-17 15:19:21.841669 | orchestrator | 15:19:21.841 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-17 15:19:21.841673 | orchestrator | 15:19:21.841 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-17 15:19:21.841677 | orchestrator | 15:19:21.841 STDOUT terraform: should now work.
2025-09-17 15:19:21.841681 | orchestrator | 15:19:21.841 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-17 15:19:21.841685 | orchestrator | 15:19:21.841 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-17 15:19:21.841690 | orchestrator | 15:19:21.841 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-17 15:19:21.944073 | orchestrator | 15:19:21.943 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-09-17 15:19:21.944149 | orchestrator | 15:19:21.943 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-17 15:19:22.129927 | orchestrator | 15:19:22.129 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-17 15:19:22.132978 | orchestrator | 15:19:22.129 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-17 15:19:22.133002 | orchestrator | 15:19:22.130 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-17 15:19:22.133007 | orchestrator | 15:19:22.130 STDOUT terraform: for this configuration.
2025-09-17 15:19:22.266459 | orchestrator | 15:19:22.266 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-09-17 15:19:22.266532 | orchestrator | 15:19:22.266 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-17 15:19:22.380002 | orchestrator | 15:19:22.379 STDOUT terraform: ci.auto.tfvars
2025-09-17 15:19:22.384886 | orchestrator | 15:19:22.384 STDOUT terraform: default_custom.tf
2025-09-17 15:19:22.490756 | orchestrator | 15:19:22.490 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-09-17 15:19:23.574096 | orchestrator | 15:19:23.573 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-17 15:19:24.115867 | orchestrator | 15:19:24.115 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-17 15:19:24.403534 | orchestrator | 15:19:24.403 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-17 15:19:24.403613 | orchestrator | 15:19:24.403 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-17 15:19:24.403621 | orchestrator | 15:19:24.403 STDOUT terraform:   + create
2025-09-17 15:19:24.403667 | orchestrator | 15:19:24.403 STDOUT terraform:  <= read (data resources)
2025-09-17 15:19:24.403787 | orchestrator | 15:19:24.403 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-17 15:19:24.404068 | orchestrator | 15:19:24.403 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-09-17 15:19:24.404165 | orchestrator | 15:19:24.403 STDOUT terraform:   # (config refers to values not yet known)
2025-09-17 15:19:24.404244 | orchestrator | 15:19:24.404 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-17 15:19:24.404307 | orchestrator | 15:19:24.404 STDOUT terraform:   + checksum = (known after apply)
2025-09-17 15:19:24.404428 | orchestrator | 15:19:24.404 STDOUT terraform:   + created_at = (known after apply)
2025-09-17 15:19:24.404521 | orchestrator | 15:19:24.404 STDOUT terraform:   + file = (known after apply)
2025-09-17 15:19:24.404654 | orchestrator | 15:19:24.404 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.404723 | orchestrator | 15:19:24.404 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 15:19:24.404789 | orchestrator | 15:19:24.404 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-09-17 15:19:24.404930 | orchestrator | 15:19:24.404 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-09-17 15:19:24.405001 | orchestrator | 15:19:24.404 STDOUT terraform:   + most_recent = true
2025-09-17 15:19:24.405050 | orchestrator | 15:19:24.404 STDOUT terraform:   + name = (known after apply)
2025-09-17 15:19:24.405168 | orchestrator | 15:19:24.405 STDOUT terraform:   + protected = (known after apply)
2025-09-17 15:19:24.405351 | orchestrator | 15:19:24.405 STDOUT terraform:   + region = (known after apply)
2025-09-17 15:19:24.405430 | orchestrator | 15:19:24.405 STDOUT terraform:   + schema = (known after apply)
2025-09-17 15:19:24.405532 | orchestrator | 15:19:24.405 STDOUT terraform:   + size_bytes = (known after apply)
2025-09-17 15:19:24.405615 | orchestrator | 15:19:24.405 STDOUT terraform:   + tags = (known after apply)
2025-09-17 15:19:24.405723 | orchestrator | 15:19:24.405 STDOUT terraform:   + updated_at = (known after apply)
2025-09-17 15:19:24.405790 | orchestrator | 15:19:24.405 STDOUT terraform:  }
2025-09-17 15:19:24.405955 | orchestrator | 15:19:24.405 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-09-17 15:19:24.406129 | orchestrator | 15:19:24.405 STDOUT terraform:   # (config refers to values not yet known)
2025-09-17 15:19:24.406210 | orchestrator | 15:19:24.406 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-17 15:19:24.406445 | orchestrator | 15:19:24.406 STDOUT terraform:   + checksum = (known after apply)
2025-09-17 15:19:24.406547 | orchestrator | 15:19:24.406 STDOUT terraform:   + created_at = (known after apply)
2025-09-17 15:19:24.406580 | orchestrator | 15:19:24.406 STDOUT terraform:   + file = (known after apply)
2025-09-17 15:19:24.406660 | orchestrator | 15:19:24.406 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.406837 | orchestrator | 15:19:24.406 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 15:19:24.406853 | orchestrator | 15:19:24.406 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-09-17 15:19:24.406947 | orchestrator | 15:19:24.406 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-09-17 15:19:24.407146 | orchestrator | 15:19:24.406 STDOUT terraform:   + most_recent = true
2025-09-17 15:19:24.407170 | orchestrator | 15:19:24.407 STDOUT terraform:   + name = (known after apply)
2025-09-17 15:19:24.407276 | orchestrator | 15:19:24.407 STDOUT terraform:   + protected = (known after apply)
2025-09-17 15:19:24.407297 | orchestrator | 15:19:24.407 STDOUT terraform:   + region = (known after apply)
2025-09-17 15:19:24.407493 | orchestrator | 15:19:24.407 STDOUT terraform:   + schema = (known after apply)
2025-09-17 15:19:24.407513 | orchestrator | 15:19:24.407 STDOUT terraform:   + size_bytes = (known after apply)
2025-09-17 15:19:24.407598 | orchestrator | 15:19:24.407 STDOUT terraform:   + tags = (known after apply)
2025-09-17 15:19:24.407660 | orchestrator | 15:19:24.407 STDOUT terraform:   + updated_at = (known after apply)
2025-09-17 15:19:24.407678 | orchestrator | 15:19:24.407 STDOUT terraform:  }
2025-09-17 15:19:24.407754 | orchestrator | 15:19:24.407 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-09-17 15:19:24.407847 | orchestrator | 15:19:24.407 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-09-17 15:19:24.407910 | orchestrator | 15:19:24.407 STDOUT terraform:   + content = (known after apply)
2025-09-17 15:19:24.407984 | orchestrator | 15:19:24.407 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-17 15:19:24.408072 | orchestrator | 15:19:24.407 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-17 15:19:24.408174 | orchestrator | 15:19:24.408 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-17 15:19:24.408515 | orchestrator | 15:19:24.408 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-17 15:19:24.408532 | orchestrator | 15:19:24.408 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-17 15:19:24.408544 | orchestrator | 15:19:24.408 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-17 15:19:24.408556 | orchestrator | 15:19:24.408 STDOUT terraform:   + directory_permission = "0777"
2025-09-17 15:19:24.408572 | orchestrator | 15:19:24.408 STDOUT terraform:   + file_permission = "0644"
2025-09-17 15:19:24.408622 | orchestrator | 15:19:24.408 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-09-17 15:19:24.408832 | orchestrator | 15:19:24.408 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.408847 | orchestrator | 15:19:24.408 STDOUT terraform:  }
2025-09-17 15:19:24.408859 | orchestrator | 15:19:24.408 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-09-17 15:19:24.408870 | orchestrator | 15:19:24.408 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-09-17 15:19:24.408884 | orchestrator | 15:19:24.408 STDOUT terraform:   + content = (known after apply)
2025-09-17 15:19:24.409294 | orchestrator | 15:19:24.408 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-17 15:19:24.409317 | orchestrator | 15:19:24.408 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-17 15:19:24.409336 | orchestrator | 15:19:24.409 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-17 15:19:24.409355 | orchestrator | 15:19:24.409 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-17 15:19:24.409374 | orchestrator | 15:19:24.409 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-17 15:19:24.409407 | orchestrator | 15:19:24.409 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-17 15:19:24.409444 | orchestrator | 15:19:24.409 STDOUT terraform:   + directory_permission = "0777"
2025-09-17 15:19:24.409465 | orchestrator | 15:19:24.409 STDOUT terraform:   + file_permission = "0644"
2025-09-17 15:19:24.409485 | orchestrator | 15:19:24.409 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-09-17 15:19:24.409512 | orchestrator | 15:19:24.409 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.409532 | orchestrator | 15:19:24.409 STDOUT terraform:  }
2025-09-17 15:19:24.409556 | orchestrator | 15:19:24.409 STDOUT terraform:   # local_file.inventory will be created
2025-09-17 15:19:24.409597 | orchestrator | 15:19:24.409 STDOUT terraform:   + resource "local_file" "inventory" {
2025-09-17 15:19:24.410087 | orchestrator | 15:19:24.409 STDOUT terraform:   + content = (known after apply)
2025-09-17 15:19:24.410127 | orchestrator | 15:19:24.409 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-17 15:19:24.410139 | orchestrator | 15:19:24.409 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-17 15:19:24.410150 | orchestrator | 15:19:24.409 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-17 15:19:24.410161 | orchestrator | 15:19:24.409 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-17 15:19:24.410171 | orchestrator | 15:19:24.409 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-17 15:19:24.410182 | orchestrator | 15:19:24.409 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-17 15:19:24.410199 | orchestrator | 15:19:24.410 STDOUT terraform:   + directory_permission = "0777"
2025-09-17 15:19:24.410211 | orchestrator | 15:19:24.410 STDOUT terraform:   + file_permission = "0644"
2025-09-17 15:19:24.410281 | orchestrator | 15:19:24.410 STDOUT terraform:   + filename = "inventory.ci"
2025-09-17 15:19:24.410296 | orchestrator | 15:19:24.410 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.410307 | orchestrator | 15:19:24.410 STDOUT terraform:  }
2025-09-17 15:19:24.410376 | orchestrator | 15:19:24.410 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-09-17 15:19:24.410394 | orchestrator | 15:19:24.410 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-09-17 15:19:24.410467 | orchestrator | 15:19:24.410 STDOUT terraform:   + content = (sensitive value)
2025-09-17 15:19:24.410530 | orchestrator | 15:19:24.410 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-17 15:19:24.410598 | orchestrator | 15:19:24.410 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-17 15:19:24.410666 | orchestrator | 15:19:24.410 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-17 15:19:24.410737 | orchestrator | 15:19:24.410 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-17 15:19:24.410806 | orchestrator | 15:19:24.410 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-17 15:19:24.410878 | orchestrator | 15:19:24.410 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-17 15:19:24.410920 | orchestrator | 15:19:24.410 STDOUT terraform:   + directory_permission = "0700"
2025-09-17 15:19:24.410976 | orchestrator | 15:19:24.410 STDOUT terraform:   + file_permission = "0600"
2025-09-17 15:19:24.411020 | orchestrator | 15:19:24.410 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-09-17 15:19:24.411112 | orchestrator | 15:19:24.411 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.411127 | orchestrator | 15:19:24.411 STDOUT terraform:  }
2025-09-17 15:19:24.411169 | orchestrator | 15:19:24.411 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-09-17 15:19:24.411240 | orchestrator | 15:19:24.411 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-09-17 15:19:24.411302 | orchestrator | 15:19:24.411 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.411317 | orchestrator | 15:19:24.411 STDOUT terraform:  }
2025-09-17 15:19:24.411538 | orchestrator | 15:19:24.411 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-17 15:19:24.411571 | orchestrator | 15:19:24.411 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-17 15:19:24.411652 | orchestrator | 15:19:24.411 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 15:19:24.411667 | orchestrator | 15:19:24.411 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 15:19:24.411746 | orchestrator | 15:19:24.411 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.411805 | orchestrator | 15:19:24.411 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 15:19:24.411865 | orchestrator | 15:19:24.411 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 15:19:24.411942 | orchestrator | 15:19:24.411 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-09-17 15:19:24.412003 | orchestrator | 15:19:24.411 STDOUT terraform:   + region = (known after apply)
2025-09-17 15:19:24.412017 | orchestrator | 15:19:24.411 STDOUT terraform:   + size = 80
2025-09-17 15:19:24.412066 | orchestrator | 15:19:24.412 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 15:19:24.412102 | orchestrator | 15:19:24.412 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 15:19:24.412112 | orchestrator | 15:19:24.412 STDOUT terraform:  }
2025-09-17 15:19:24.412194 | orchestrator | 15:19:24.412 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-17 15:19:24.412315 | orchestrator | 15:19:24.412 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-17 15:19:24.412379 | orchestrator | 15:19:24.412 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 15:19:24.412415 | orchestrator | 15:19:24.412 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 15:19:24.412473 | orchestrator | 15:19:24.412 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.412536 | orchestrator | 15:19:24.412 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 15:19:24.412602 | orchestrator | 15:19:24.412 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 15:19:24.412673 | orchestrator | 15:19:24.412 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-09-17 15:19:24.412734 | orchestrator | 15:19:24.412 STDOUT terraform:   + region = (known after apply)
2025-09-17 15:19:24.412748 | orchestrator | 15:19:24.412 STDOUT terraform:   + size = 80
2025-09-17 15:19:24.412801 | orchestrator | 15:19:24.412 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 15:19:24.412836 | orchestrator | 15:19:24.412 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 15:19:24.412849 | orchestrator | 15:19:24.412 STDOUT terraform:  }
2025-09-17 15:19:24.412931 | orchestrator | 15:19:24.412 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-17 15:19:24.413008 | orchestrator | 15:19:24.412 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-17 15:19:24.413066 | orchestrator | 15:19:24.412 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 15:19:24.413086 | orchestrator | 15:19:24.413 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 15:19:24.413162 | orchestrator | 15:19:24.413 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.413235 | orchestrator | 15:19:24.413 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 15:19:24.413292 | orchestrator | 15:19:24.413 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 15:19:24.413370 | orchestrator | 15:19:24.413 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-09-17 15:19:24.413424 | orchestrator | 15:19:24.413 STDOUT terraform:   + region = (known after apply)
2025-09-17 15:19:24.413436 | orchestrator | 15:19:24.413 STDOUT terraform:   + size = 80
2025-09-17 15:19:24.413491 | orchestrator | 15:19:24.413 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 15:19:24.413526 | orchestrator | 15:19:24.413 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 15:19:24.413535 | orchestrator | 15:19:24.413 STDOUT terraform:  }
2025-09-17 15:19:24.413616 | orchestrator | 15:19:24.413 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-17 15:19:24.413691 | orchestrator | 15:19:24.413 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-17 15:19:24.413757 | orchestrator | 15:19:24.413 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 15:19:24.413776 | orchestrator | 15:19:24.413 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 15:19:24.413847 | orchestrator | 15:19:24.413 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.413898 | orchestrator | 15:19:24.413 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 15:19:24.413960 | orchestrator | 15:19:24.413 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 15:19:24.414048 | orchestrator | 15:19:24.413 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-09-17 15:19:24.414105 | orchestrator | 15:19:24.414 STDOUT terraform:   + region = (known after apply)
2025-09-17 15:19:24.414118 | orchestrator | 15:19:24.414 STDOUT terraform:   + size = 80
2025-09-17 15:19:24.414172 | orchestrator | 15:19:24.414 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 15:19:24.414207 | orchestrator | 15:19:24.414 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 15:19:24.414260 | orchestrator | 15:19:24.414 STDOUT terraform:  }
2025-09-17 15:19:24.414334 | orchestrator | 15:19:24.414 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-17 15:19:24.414406 | orchestrator | 15:19:24.414 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-17 15:19:24.414463 | orchestrator | 15:19:24.414 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 15:19:24.414494 | orchestrator | 15:19:24.414 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 15:19:24.414550 | orchestrator | 15:19:24.414 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.414605 | orchestrator | 15:19:24.414 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 15:19:24.414659 | orchestrator | 15:19:24.414 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 15:19:24.414728 | orchestrator | 15:19:24.414 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-09-17 15:19:24.414784 | orchestrator | 15:19:24.414 STDOUT terraform:   + region = (known after apply)
2025-09-17 15:19:24.414821 | orchestrator | 15:19:24.414 STDOUT terraform:   + size = 80
2025-09-17 15:19:24.414858 | orchestrator | 15:19:24.414 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 15:19:24.414894 | orchestrator | 15:19:24.414 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 15:19:24.414902 | orchestrator | 15:19:24.414 STDOUT terraform:  }
2025-09-17 15:19:24.414974 | orchestrator | 15:19:24.414 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-17 15:19:24.415044 | orchestrator | 15:19:24.414 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-17 15:19:24.415099 | orchestrator | 15:19:24.415 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 15:19:24.415136 | orchestrator | 15:19:24.415 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 15:19:24.415190 | orchestrator | 15:19:24.415 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.415255 | orchestrator | 15:19:24.415 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 15:19:24.415311 | orchestrator | 15:19:24.415 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 15:19:24.415381 | orchestrator | 15:19:24.415 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-09-17 15:19:24.415436 | orchestrator | 15:19:24.415 STDOUT terraform:   + region = (known after apply)
2025-09-17 15:19:24.415465 | orchestrator | 15:19:24.415 STDOUT terraform:   + size = 80
2025-09-17 15:19:24.415502 | orchestrator | 15:19:24.415 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 15:19:24.415540 | orchestrator | 15:19:24.415 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 15:19:24.415550 | orchestrator | 15:19:24.415 STDOUT terraform:  }
2025-09-17 15:19:24.415627 | orchestrator | 15:19:24.415 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-17 15:19:24.415697 | orchestrator | 15:19:24.415 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-17 15:19:24.415751 | orchestrator | 15:19:24.415 STDOUT terraform:   + attachment = (known after apply)
2025-09-17 15:19:24.415780 | orchestrator | 15:19:24.415 STDOUT terraform:   + availability_zone = "nova"
2025-09-17 15:19:24.415843 | orchestrator | 15:19:24.415 STDOUT terraform:   + id = (known after apply)
2025-09-17 15:19:24.415891 | orchestrator | 15:19:24.415 STDOUT terraform:   + image_id = (known after apply)
2025-09-17 15:19:24.415946 | orchestrator | 15:19:24.415 STDOUT terraform:   + metadata = (known after apply)
2025-09-17 15:19:24.416016 | orchestrator | 15:19:24.415 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-09-17 15:19:24.416071 | orchestrator | 15:19:24.416 STDOUT terraform:   + region = (known after apply)
2025-09-17 15:19:24.416107 | orchestrator | 15:19:24.416 STDOUT terraform:   + size = 80
2025-09-17 15:19:24.416123 | orchestrator | 15:19:24.416 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-17 15:19:24.416170 | orchestrator | 15:19:24.416 STDOUT terraform:   + volume_type = "ssd"
2025-09-17 15:19:24.416181 | orchestrator | 15:19:24.416 STDOUT terraform:  }
2025-09-17 15:19:24.416267 | orchestrator | 15:19:24.416 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-17 15:19:24.416333 | orchestrator | 15:19:24.416 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-17 15:19:24.416386 | orchestrator | 15:19:24.416 STDOUT
terraform:  + attachment = (known after apply) 2025-09-17 15:19:24.416415 | orchestrator | 15:19:24.416 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.416473 | orchestrator | 15:19:24.416 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.416527 | orchestrator | 15:19:24.416 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 15:19:24.416594 | orchestrator | 15:19:24.416 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-17 15:19:24.416643 | orchestrator | 15:19:24.416 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.416672 | orchestrator | 15:19:24.416 STDOUT terraform:  + size = 20 2025-09-17 15:19:24.416701 | orchestrator | 15:19:24.416 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 15:19:24.416739 | orchestrator | 15:19:24.416 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 15:19:24.416750 | orchestrator | 15:19:24.416 STDOUT terraform:  } 2025-09-17 15:19:24.416823 | orchestrator | 15:19:24.416 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-17 15:19:24.416892 | orchestrator | 15:19:24.416 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 15:19:24.416937 | orchestrator | 15:19:24.416 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 15:19:24.416971 | orchestrator | 15:19:24.416 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.417026 | orchestrator | 15:19:24.416 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.417080 | orchestrator | 15:19:24.417 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 15:19:24.417139 | orchestrator | 15:19:24.417 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-17 15:19:24.417194 | orchestrator | 15:19:24.417 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.417239 | orchestrator | 15:19:24.417 STDOUT terraform:  + size = 20 2025-09-17 15:19:24.417279 | 
orchestrator | 15:19:24.417 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 15:19:24.417311 | orchestrator | 15:19:24.417 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 15:19:24.417321 | orchestrator | 15:19:24.417 STDOUT terraform:  } 2025-09-17 15:19:24.417394 | orchestrator | 15:19:24.417 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-17 15:19:24.417461 | orchestrator | 15:19:24.417 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 15:19:24.417515 | orchestrator | 15:19:24.417 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 15:19:24.417551 | orchestrator | 15:19:24.417 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.417607 | orchestrator | 15:19:24.417 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.417662 | orchestrator | 15:19:24.417 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 15:19:24.417721 | orchestrator | 15:19:24.417 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-17 15:19:24.417775 | orchestrator | 15:19:24.417 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.417807 | orchestrator | 15:19:24.417 STDOUT terraform:  + size = 20 2025-09-17 15:19:24.417844 | orchestrator | 15:19:24.417 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 15:19:24.417881 | orchestrator | 15:19:24.417 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 15:19:24.417892 | orchestrator | 15:19:24.417 STDOUT terraform:  } 2025-09-17 15:19:24.417966 | orchestrator | 15:19:24.417 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-17 15:19:24.418178 | orchestrator | 15:19:24.417 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 15:19:24.418269 | orchestrator | 15:19:24.418 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 15:19:24.418309 | orchestrator | 
15:19:24.418 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.418366 | orchestrator | 15:19:24.418 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.418422 | orchestrator | 15:19:24.418 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 15:19:24.418485 | orchestrator | 15:19:24.418 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-17 15:19:24.418537 | orchestrator | 15:19:24.418 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.418553 | orchestrator | 15:19:24.418 STDOUT terraform:  + size = 20 2025-09-17 15:19:24.418589 | orchestrator | 15:19:24.418 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 15:19:24.418622 | orchestrator | 15:19:24.418 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 15:19:24.418632 | orchestrator | 15:19:24.418 STDOUT terraform:  } 2025-09-17 15:19:24.418699 | orchestrator | 15:19:24.418 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-17 15:19:24.418758 | orchestrator | 15:19:24.418 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 15:19:24.418808 | orchestrator | 15:19:24.418 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 15:19:24.418841 | orchestrator | 15:19:24.418 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.418890 | orchestrator | 15:19:24.418 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.418938 | orchestrator | 15:19:24.418 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 15:19:24.419003 | orchestrator | 15:19:24.418 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-17 15:19:24.419038 | orchestrator | 15:19:24.418 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.419060 | orchestrator | 15:19:24.419 STDOUT terraform:  + size = 20 2025-09-17 15:19:24.419096 | orchestrator | 15:19:24.419 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 
15:19:24.419130 | orchestrator | 15:19:24.419 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 15:19:24.419146 | orchestrator | 15:19:24.419 STDOUT terraform:  } 2025-09-17 15:19:24.419202 | orchestrator | 15:19:24.419 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-17 15:19:24.419272 | orchestrator | 15:19:24.419 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 15:19:24.419321 | orchestrator | 15:19:24.419 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 15:19:24.419352 | orchestrator | 15:19:24.419 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.419401 | orchestrator | 15:19:24.419 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.419448 | orchestrator | 15:19:24.419 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 15:19:24.419503 | orchestrator | 15:19:24.419 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-17 15:19:24.419550 | orchestrator | 15:19:24.419 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.419578 | orchestrator | 15:19:24.419 STDOUT terraform:  + size = 20 2025-09-17 15:19:24.419610 | orchestrator | 15:19:24.419 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 15:19:24.419642 | orchestrator | 15:19:24.419 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 15:19:24.419652 | orchestrator | 15:19:24.419 STDOUT terraform:  } 2025-09-17 15:19:24.419746 | orchestrator | 15:19:24.419 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-17 15:19:24.419774 | orchestrator | 15:19:24.419 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 15:19:24.419822 | orchestrator | 15:19:24.419 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 15:19:24.419854 | orchestrator | 15:19:24.419 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.419904 | 
orchestrator | 15:19:24.419 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.419952 | orchestrator | 15:19:24.419 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 15:19:24.420003 | orchestrator | 15:19:24.419 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-17 15:19:24.420052 | orchestrator | 15:19:24.419 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.420081 | orchestrator | 15:19:24.420 STDOUT terraform:  + size = 20 2025-09-17 15:19:24.420114 | orchestrator | 15:19:24.420 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 15:19:24.420146 | orchestrator | 15:19:24.420 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 15:19:24.420155 | orchestrator | 15:19:24.420 STDOUT terraform:  } 2025-09-17 15:19:24.420235 | orchestrator | 15:19:24.420 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-17 15:19:24.420288 | orchestrator | 15:19:24.420 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 15:19:24.420336 | orchestrator | 15:19:24.420 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 15:19:24.420369 | orchestrator | 15:19:24.420 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.420417 | orchestrator | 15:19:24.420 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.420466 | orchestrator | 15:19:24.420 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 15:19:24.420519 | orchestrator | 15:19:24.420 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-17 15:19:24.420567 | orchestrator | 15:19:24.420 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.420601 | orchestrator | 15:19:24.420 STDOUT terraform:  + size = 20 2025-09-17 15:19:24.420631 | orchestrator | 15:19:24.420 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 15:19:24.420661 | orchestrator | 15:19:24.420 STDOUT terraform:  + volume_type = "ssd" 
2025-09-17 15:19:24.420671 | orchestrator | 15:19:24.420 STDOUT terraform:  } 2025-09-17 15:19:24.420735 | orchestrator | 15:19:24.420 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-17 15:19:24.420794 | orchestrator | 15:19:24.420 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-17 15:19:24.420839 | orchestrator | 15:19:24.420 STDOUT terraform:  + attachment = (known after apply) 2025-09-17 15:19:24.420872 | orchestrator | 15:19:24.420 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.420921 | orchestrator | 15:19:24.420 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.420968 | orchestrator | 15:19:24.420 STDOUT terraform:  + metadata = (known after apply) 2025-09-17 15:19:24.421022 | orchestrator | 15:19:24.420 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-17 15:19:24.421071 | orchestrator | 15:19:24.421 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.421099 | orchestrator | 15:19:24.421 STDOUT terraform:  + size = 20 2025-09-17 15:19:24.421131 | orchestrator | 15:19:24.421 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-17 15:19:24.421163 | orchestrator | 15:19:24.421 STDOUT terraform:  + volume_type = "ssd" 2025-09-17 15:19:24.421173 | orchestrator | 15:19:24.421 STDOUT terraform:  } 2025-09-17 15:19:24.421260 | orchestrator | 15:19:24.421 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-17 15:19:24.421306 | orchestrator | 15:19:24.421 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-17 15:19:24.421352 | orchestrator | 15:19:24.421 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-17 15:19:24.421399 | orchestrator | 15:19:24.421 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-17 15:19:24.421445 | orchestrator | 15:19:24.421 STDOUT terraform:  + all_metadata = (known after apply) 
2025-09-17 15:19:24.421496 | orchestrator | 15:19:24.421 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 15:19:24.421528 | orchestrator | 15:19:24.421 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.421556 | orchestrator | 15:19:24.421 STDOUT terraform:  + config_drive = true 2025-09-17 15:19:24.421603 | orchestrator | 15:19:24.421 STDOUT terraform:  + created = (known after apply) 2025-09-17 15:19:24.421651 | orchestrator | 15:19:24.421 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-17 15:19:24.421691 | orchestrator | 15:19:24.421 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-17 15:19:24.421722 | orchestrator | 15:19:24.421 STDOUT terraform:  + force_delete = false 2025-09-17 15:19:24.421769 | orchestrator | 15:19:24.421 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-17 15:19:24.421818 | orchestrator | 15:19:24.421 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.421865 | orchestrator | 15:19:24.421 STDOUT terraform:  + image_id = (known after apply) 2025-09-17 15:19:24.421912 | orchestrator | 15:19:24.421 STDOUT terraform:  + image_name = (known after apply) 2025-09-17 15:19:24.421946 | orchestrator | 15:19:24.421 STDOUT terraform:  + key_pair = "testbed" 2025-09-17 15:19:24.421988 | orchestrator | 15:19:24.421 STDOUT terraform:  + name = "testbed-manager" 2025-09-17 15:19:24.422043 | orchestrator | 15:19:24.421 STDOUT terraform:  + power_state = "active" 2025-09-17 15:19:24.422108 | orchestrator | 15:19:24.422 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.422155 | orchestrator | 15:19:24.422 STDOUT terraform:  + security_groups = (known after apply) 2025-09-17 15:19:24.422185 | orchestrator | 15:19:24.422 STDOUT terraform:  + stop_before_destroy = false 2025-09-17 15:19:24.422262 | orchestrator | 15:19:24.422 STDOUT terraform:  + updated = (known after apply) 2025-09-17 15:19:24.422304 | orchestrator | 15:19:24.422 STDOUT terraform:  + 
user_data = (sensitive value) 2025-09-17 15:19:24.422327 | orchestrator | 15:19:24.422 STDOUT terraform:  + block_device { 2025-09-17 15:19:24.422358 | orchestrator | 15:19:24.422 STDOUT terraform:  + boot_index = 0 2025-09-17 15:19:24.422391 | orchestrator | 15:19:24.422 STDOUT terraform:  + delete_on_termination = false 2025-09-17 15:19:24.422433 | orchestrator | 15:19:24.422 STDOUT terraform:  + destination_type = "volume" 2025-09-17 15:19:24.422463 | orchestrator | 15:19:24.422 STDOUT terraform:  + multiattach = false 2025-09-17 15:19:24.422499 | orchestrator | 15:19:24.422 STDOUT terraform:  + source_type = "volume" 2025-09-17 15:19:24.422545 | orchestrator | 15:19:24.422 STDOUT terraform:  + uuid = (known after apply) 2025-09-17 15:19:24.422554 | orchestrator | 15:19:24.422 STDOUT terraform:  } 2025-09-17 15:19:24.422576 | orchestrator | 15:19:24.422 STDOUT terraform:  + network { 2025-09-17 15:19:24.422600 | orchestrator | 15:19:24.422 STDOUT terraform:  + access_network = false 2025-09-17 15:19:24.422638 | orchestrator | 15:19:24.422 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-17 15:19:24.422681 | orchestrator | 15:19:24.422 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-17 15:19:24.422714 | orchestrator | 15:19:24.422 STDOUT terraform:  + mac = (known after apply) 2025-09-17 15:19:24.422752 | orchestrator | 15:19:24.422 STDOUT terraform:  + name = (known after apply) 2025-09-17 15:19:24.422790 | orchestrator | 15:19:24.422 STDOUT terraform:  + port = (known after apply) 2025-09-17 15:19:24.422827 | orchestrator | 15:19:24.422 STDOUT terraform:  + uuid = (known after apply) 2025-09-17 15:19:24.422836 | orchestrator | 15:19:24.422 STDOUT terraform:  } 2025-09-17 15:19:24.422857 | orchestrator | 15:19:24.422 STDOUT terraform:  } 2025-09-17 15:19:24.422911 | orchestrator | 15:19:24.422 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-17 15:19:24.422964 | orchestrator | 15:19:24.422 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-17 15:19:24.423007 | orchestrator | 15:19:24.422 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-17 15:19:24.423049 | orchestrator | 15:19:24.423 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-17 15:19:24.423091 | orchestrator | 15:19:24.423 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-17 15:19:24.423134 | orchestrator | 15:19:24.423 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 15:19:24.423163 | orchestrator | 15:19:24.423 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.423190 | orchestrator | 15:19:24.423 STDOUT terraform:  + config_drive = true 2025-09-17 15:19:24.423244 | orchestrator | 15:19:24.423 STDOUT terraform:  + created = (known after apply) 2025-09-17 15:19:24.423286 | orchestrator | 15:19:24.423 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-17 15:19:24.423316 | orchestrator | 15:19:24.423 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-17 15:19:24.423345 | orchestrator | 15:19:24.423 STDOUT terraform:  + force_delete = false 2025-09-17 15:19:24.423385 | orchestrator | 15:19:24.423 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-17 15:19:24.423427 | orchestrator | 15:19:24.423 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.423469 | orchestrator | 15:19:24.423 STDOUT terraform:  + image_id = (known after apply) 2025-09-17 15:19:24.423513 | orchestrator | 15:19:24.423 STDOUT terraform:  + image_name = (known after apply) 2025-09-17 15:19:24.423543 | orchestrator | 15:19:24.423 STDOUT terraform:  + key_pair = "testbed" 2025-09-17 15:19:24.423580 | orchestrator | 15:19:24.423 STDOUT terraform:  + name = "testbed-node-0" 2025-09-17 15:19:24.423610 | orchestrator | 15:19:24.423 STDOUT terraform:  + power_state = "active" 2025-09-17 15:19:24.423653 | orchestrator | 15:19:24.423 STDOUT terraform:  + region = (known after 
apply) 2025-09-17 15:19:24.423694 | orchestrator | 15:19:24.423 STDOUT terraform:  + security_groups = (known after apply) 2025-09-17 15:19:24.423722 | orchestrator | 15:19:24.423 STDOUT terraform:  + stop_before_destroy = false 2025-09-17 15:19:24.423765 | orchestrator | 15:19:24.423 STDOUT terraform:  + updated = (known after apply) 2025-09-17 15:19:24.423830 | orchestrator | 15:19:24.423 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-17 15:19:24.423850 | orchestrator | 15:19:24.423 STDOUT terraform:  + block_device { 2025-09-17 15:19:24.423879 | orchestrator | 15:19:24.423 STDOUT terraform:  + boot_index = 0 2025-09-17 15:19:24.423914 | orchestrator | 15:19:24.423 STDOUT terraform:  + delete_on_termination = false 2025-09-17 15:19:24.423950 | orchestrator | 15:19:24.423 STDOUT terraform:  + destination_type = "volume" 2025-09-17 15:19:24.423985 | orchestrator | 15:19:24.423 STDOUT terraform:  + multiattach = false 2025-09-17 15:19:24.424020 | orchestrator | 15:19:24.423 STDOUT terraform:  + source_type = "volume" 2025-09-17 15:19:24.424067 | orchestrator | 15:19:24.424 STDOUT terraform:  + uuid = (known after apply) 2025-09-17 15:19:24.424076 | orchestrator | 15:19:24.424 STDOUT terraform:  } 2025-09-17 15:19:24.424095 | orchestrator | 15:19:24.424 STDOUT terraform:  + network { 2025-09-17 15:19:24.424120 | orchestrator | 15:19:24.424 STDOUT terraform:  + access_network = false 2025-09-17 15:19:24.424163 | orchestrator | 15:19:24.424 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-17 15:19:24.424195 | orchestrator | 15:19:24.424 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-17 15:19:24.424271 | orchestrator | 15:19:24.424 STDOUT terraform:  + mac = (known after apply) 2025-09-17 15:19:24.424283 | orchestrator | 15:19:24.424 STDOUT terraform:  + name = (known after apply) 2025-09-17 15:19:24.424317 | orchestrator | 15:19:24.424 STDOUT terraform:  + port = (known after apply) 2025-09-17 
15:19:24.424354 | orchestrator | 15:19:24.424 STDOUT terraform:  + uuid = (known after apply) 2025-09-17 15:19:24.424363 | orchestrator | 15:19:24.424 STDOUT terraform:  } 2025-09-17 15:19:24.424383 | orchestrator | 15:19:24.424 STDOUT terraform:  } 2025-09-17 15:19:24.424436 | orchestrator | 15:19:24.424 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-17 15:19:24.424488 | orchestrator | 15:19:24.424 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-17 15:19:24.424528 | orchestrator | 15:19:24.424 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-17 15:19:24.424570 | orchestrator | 15:19:24.424 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-17 15:19:24.424614 | orchestrator | 15:19:24.424 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-17 15:19:24.424656 | orchestrator | 15:19:24.424 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 15:19:24.424684 | orchestrator | 15:19:24.424 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.424709 | orchestrator | 15:19:24.424 STDOUT terraform:  + config_drive = true 2025-09-17 15:19:24.424752 | orchestrator | 15:19:24.424 STDOUT terraform:  + created = (known after apply) 2025-09-17 15:19:24.424794 | orchestrator | 15:19:24.424 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-17 15:19:24.424829 | orchestrator | 15:19:24.424 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-17 15:19:24.424860 | orchestrator | 15:19:24.424 STDOUT terraform:  + force_delete = false 2025-09-17 15:19:24.424901 | orchestrator | 15:19:24.424 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-17 15:19:24.424944 | orchestrator | 15:19:24.424 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.424988 | orchestrator | 15:19:24.424 STDOUT terraform:  + image_id = (known after apply) 2025-09-17 15:19:24.425031 | orchestrator | 15:19:24.424 STDOUT 
terraform:  + image_name = (known after apply) 2025-09-17 15:19:24.425062 | orchestrator | 15:19:24.425 STDOUT terraform:  + key_pair = "testbed" 2025-09-17 15:19:24.425098 | orchestrator | 15:19:24.425 STDOUT terraform:  + name = "testbed-node-1" 2025-09-17 15:19:24.425139 | orchestrator | 15:19:24.425 STDOUT terraform:  + power_state = "active" 2025-09-17 15:19:24.425172 | orchestrator | 15:19:24.425 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.425226 | orchestrator | 15:19:24.425 STDOUT terraform:  + security_groups = (known after apply) 2025-09-17 15:19:24.425253 | orchestrator | 15:19:24.425 STDOUT terraform:  + stop_before_destroy = false 2025-09-17 15:19:24.425295 | orchestrator | 15:19:24.425 STDOUT terraform:  + updated = (known after apply) 2025-09-17 15:19:24.425356 | orchestrator | 15:19:24.425 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-17 15:19:24.425370 | orchestrator | 15:19:24.425 STDOUT terraform:  + block_device { 2025-09-17 15:19:24.425398 | orchestrator | 15:19:24.425 STDOUT terraform:  + boot_index = 0 2025-09-17 15:19:24.425434 | orchestrator | 15:19:24.425 STDOUT terraform:  + delete_on_termination = false 2025-09-17 15:19:24.425469 | orchestrator | 15:19:24.425 STDOUT terraform:  + destination_type = "volume" 2025-09-17 15:19:24.425502 | orchestrator | 15:19:24.425 STDOUT terraform:  + multiattach = false 2025-09-17 15:19:24.425541 | orchestrator | 15:19:24.425 STDOUT terraform:  + source_type = "volume" 2025-09-17 15:19:24.425587 | orchestrator | 15:19:24.425 STDOUT terraform:  + uuid = (known after apply) 2025-09-17 15:19:24.425601 | orchestrator | 15:19:24.425 STDOUT terraform:  } 2025-09-17 15:19:24.425608 | orchestrator | 15:19:24.425 STDOUT terraform:  + network { 2025-09-17 15:19:24.425636 | orchestrator | 15:19:24.425 STDOUT terraform:  + access_network = false 2025-09-17 15:19:24.425673 | orchestrator | 15:19:24.425 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-09-17 15:19:24.425709 | orchestrator | 15:19:24.425 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-17 15:19:24.425746 | orchestrator | 15:19:24.425 STDOUT terraform:  + mac = (known after apply) 2025-09-17 15:19:24.425784 | orchestrator | 15:19:24.425 STDOUT terraform:  + name = (known after apply) 2025-09-17 15:19:24.425822 | orchestrator | 15:19:24.425 STDOUT terraform:  + port = (known after apply) 2025-09-17 15:19:24.425859 | orchestrator | 15:19:24.425 STDOUT terraform:  + uuid = (known after apply) 2025-09-17 15:19:24.425872 | orchestrator | 15:19:24.425 STDOUT terraform:  } 2025-09-17 15:19:24.425884 | orchestrator | 15:19:24.425 STDOUT terraform:  } 2025-09-17 15:19:24.425937 | orchestrator | 15:19:24.425 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-17 15:19:24.426003 | orchestrator | 15:19:24.425 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-17 15:19:24.426050 | orchestrator | 15:19:24.425 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-17 15:19:24.426094 | orchestrator | 15:19:24.426 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-17 15:19:24.426163 | orchestrator | 15:19:24.426 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-17 15:19:24.426298 | orchestrator | 15:19:24.426 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 15:19:24.426313 | orchestrator | 15:19:24.426 STDOUT terraform:  + availability_zone = "nova" 2025-09-17 15:19:24.426341 | orchestrator | 15:19:24.426 STDOUT terraform:  + config_drive = true 2025-09-17 15:19:24.426434 | orchestrator | 15:19:24.426 STDOUT terraform:  + created = (known after apply) 2025-09-17 15:19:24.426496 | orchestrator | 15:19:24.426 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-17 15:19:24.426562 | orchestrator | 15:19:24.426 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-17 15:19:24.426598 | orchestrator | 15:19:24.426 
STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
2025-09-17 15:19:24.437766 | orchestrator | 15:19:24.437
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-17 15:19:24.437790 | orchestrator | 15:19:24.437 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-17 15:19:24.437800 | orchestrator | 15:19:24.437 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 15:19:24.437852 | orchestrator | 15:19:24.437 STDOUT terraform:  + device_id = (known after apply) 2025-09-17 15:19:24.438073 | orchestrator | 15:19:24.437 STDOUT terraform:  + device_owner = (known after apply) 2025-09-17 15:19:24.438314 | orchestrator | 15:19:24.437 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-17 15:19:24.438473 | orchestrator | 15:19:24.437 STDOUT terraform:  + dns_name = (known after apply) 2025-09-17 15:19:24.438482 | orchestrator | 15:19:24.437 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.438488 | orchestrator | 15:19:24.437 STDOUT terraform:  + mac_address = (known after apply) 2025-09-17 15:19:24.438501 | orchestrator | 15:19:24.438 STDOUT terraform:  + network_id = (known after 2025-09-17 15:19:24.438509 | orchestrator | 15:19:24.438 STDOUT terraform:  apply) 2025-09-17 15:19:24.438516 | orchestrator | 15:19:24.438 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-17 15:19:24.438520 | orchestrator | 15:19:24.438 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-17 15:19:24.438524 | orchestrator | 15:19:24.438 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.438528 | orchestrator | 15:19:24.438 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-17 15:19:24.438532 | orchestrator | 15:19:24.438 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 15:19:24.438535 | orchestrator | 15:19:24.438 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.438539 | orchestrator | 15:19:24.438 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-17 15:19:24.438543 | orchestrator | 15:19:24.438 STDOUT terraform:  } 
2025-09-17 15:19:24.438547 | orchestrator | 15:19:24.438 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.438550 | orchestrator | 15:19:24.438 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-17 15:19:24.438554 | orchestrator | 15:19:24.438 STDOUT terraform:  } 2025-09-17 15:19:24.438558 | orchestrator | 15:19:24.438 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.438571 | orchestrator | 15:19:24.438 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-17 15:19:24.438575 | orchestrator | 15:19:24.438 STDOUT terraform:  } 2025-09-17 15:19:24.438579 | orchestrator | 15:19:24.438 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.438583 | orchestrator | 15:19:24.438 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-17 15:19:24.438587 | orchestrator | 15:19:24.438 STDOUT terraform:  } 2025-09-17 15:19:24.438592 | orchestrator | 15:19:24.438 STDOUT terraform:  + binding (known after apply) 2025-09-17 15:19:24.438598 | orchestrator | 15:19:24.438 STDOUT terraform:  + fixed_ip { 2025-09-17 15:19:24.438633 | orchestrator | 15:19:24.438 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-17 15:19:24.438690 | orchestrator | 15:19:24.438 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-17 15:19:24.438696 | orchestrator | 15:19:24.438 STDOUT terraform:  } 2025-09-17 15:19:24.438700 | orchestrator | 15:19:24.438 STDOUT terraform:  } 2025-09-17 15:19:24.438706 | orchestrator | 15:19:24.438 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-17 15:19:24.438757 | orchestrator | 15:19:24.438 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-17 15:19:24.438795 | orchestrator | 15:19:24.438 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-17 15:19:24.438825 | orchestrator | 15:19:24.438 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-17 15:19:24.438910 | orchestrator | 
15:19:24.438 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-17 15:19:24.438918 | orchestrator | 15:19:24.438 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 15:19:24.438922 | orchestrator | 15:19:24.438 STDOUT terraform:  + device_id = (known after apply) 2025-09-17 15:19:24.438950 | orchestrator | 15:19:24.438 STDOUT terraform:  + device_owner = (known after apply) 2025-09-17 15:19:24.438997 | orchestrator | 15:19:24.438 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-17 15:19:24.439020 | orchestrator | 15:19:24.438 STDOUT terraform:  + dns_name = (known after apply) 2025-09-17 15:19:24.439053 | orchestrator | 15:19:24.439 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.439108 | orchestrator | 15:19:24.439 STDOUT terraform:  + mac_address = (known after apply) 2025-09-17 15:19:24.439115 | orchestrator | 15:19:24.439 STDOUT terraform:  + network_id = (known after apply) 2025-09-17 15:19:24.439154 | orchestrator | 15:19:24.439 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-17 15:19:24.439234 | orchestrator | 15:19:24.439 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-17 15:19:24.439242 | orchestrator | 15:19:24.439 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.439270 | orchestrator | 15:19:24.439 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-17 15:19:24.439318 | orchestrator | 15:19:24.439 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 15:19:24.439344 | orchestrator | 15:19:24.439 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.439398 | orchestrator | 15:19:24.439 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-17 15:19:24.439404 | orchestrator | 15:19:24.439 STDOUT terraform:  } 2025-09-17 15:19:24.439410 | orchestrator | 15:19:24.439 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.439413 | orchestrator | 15:19:24.439 STDOUT terraform: 
 + ip_address = "192.168.16.254/20" 2025-09-17 15:19:24.439417 | orchestrator | 15:19:24.439 STDOUT terraform:  } 2025-09-17 15:19:24.439446 | orchestrator | 15:19:24.439 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.439461 | orchestrator | 15:19:24.439 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-17 15:19:24.439490 | orchestrator | 15:19:24.439 STDOUT terraform:  } 2025-09-17 15:19:24.439496 | orchestrator | 15:19:24.439 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.439527 | orchestrator | 15:19:24.439 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-17 15:19:24.439533 | orchestrator | 15:19:24.439 STDOUT terraform:  } 2025-09-17 15:19:24.439537 | orchestrator | 15:19:24.439 STDOUT terraform:  + binding (known after apply) 2025-09-17 15:19:24.439577 | orchestrator | 15:19:24.439 STDOUT terraform:  + fixed_ip { 2025-09-17 15:19:24.439584 | orchestrator | 15:19:24.439 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-17 15:19:24.439588 | orchestrator | 15:19:24.439 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-17 15:19:24.439591 | orchestrator | 15:19:24.439 STDOUT terraform:  } 2025-09-17 15:19:24.439612 | orchestrator | 15:19:24.439 STDOUT terraform:  } 2025-09-17 15:19:24.439618 | orchestrator | 15:19:24.439 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-17 15:19:24.439657 | orchestrator | 15:19:24.439 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-17 15:19:24.439689 | orchestrator | 15:19:24.439 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-17 15:19:24.439730 | orchestrator | 15:19:24.439 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-17 15:19:24.439783 | orchestrator | 15:19:24.439 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-17 15:19:24.439791 | orchestrator | 15:19:24.439 STDOUT terraform:  + all_tags = (known 
after apply) 2025-09-17 15:19:24.439821 | orchestrator | 15:19:24.439 STDOUT terraform:  + device_id = (known after apply) 2025-09-17 15:19:24.439849 | orchestrator | 15:19:24.439 STDOUT terraform:  + device_owner = (known after apply) 2025-09-17 15:19:24.439886 | orchestrator | 15:19:24.439 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-17 15:19:24.439919 | orchestrator | 15:19:24.439 STDOUT terraform:  + dns_name = (known after apply) 2025-09-17 15:19:24.439951 | orchestrator | 15:19:24.439 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.440010 | orchestrator | 15:19:24.439 STDOUT terraform:  + mac_address = (known after apply) 2025-09-17 15:19:24.440018 | orchestrator | 15:19:24.439 STDOUT terraform:  + network_id = (known after apply) 2025-09-17 15:19:24.440052 | orchestrator | 15:19:24.440 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-17 15:19:24.440089 | orchestrator | 15:19:24.440 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-17 15:19:24.440133 | orchestrator | 15:19:24.440 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.440180 | orchestrator | 15:19:24.440 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-17 15:19:24.440187 | orchestrator | 15:19:24.440 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 15:19:24.440203 | orchestrator | 15:19:24.440 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.440255 | orchestrator | 15:19:24.440 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-17 15:19:24.440260 | orchestrator | 15:19:24.440 STDOUT terraform:  } 2025-09-17 15:19:24.440266 | orchestrator | 15:19:24.440 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.440294 | orchestrator | 15:19:24.440 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-17 15:19:24.440304 | orchestrator | 15:19:24.440 STDOUT terraform:  } 2025-09-17 15:19:24.440313 | orchestrator | 15:19:24.440 
STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.440341 | orchestrator | 15:19:24.440 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-17 15:19:24.440350 | orchestrator | 15:19:24.440 STDOUT terraform:  } 2025-09-17 15:19:24.440356 | orchestrator | 15:19:24.440 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.440405 | orchestrator | 15:19:24.440 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-17 15:19:24.440410 | orchestrator | 15:19:24.440 STDOUT terraform:  } 2025-09-17 15:19:24.440416 | orchestrator | 15:19:24.440 STDOUT terraform:  + binding (known after apply) 2025-09-17 15:19:24.440424 | orchestrator | 15:19:24.440 STDOUT terraform:  + fixed_ip { 2025-09-17 15:19:24.440461 | orchestrator | 15:19:24.440 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-17 15:19:24.440492 | orchestrator | 15:19:24.440 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-17 15:19:24.440497 | orchestrator | 15:19:24.440 STDOUT terraform:  } 2025-09-17 15:19:24.440505 | orchestrator | 15:19:24.440 STDOUT terraform:  } 2025-09-17 15:19:24.440543 | orchestrator | 15:19:24.440 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-17 15:19:24.440594 | orchestrator | 15:19:24.440 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-17 15:19:24.440628 | orchestrator | 15:19:24.440 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-17 15:19:24.440675 | orchestrator | 15:19:24.440 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-17 15:19:24.440682 | orchestrator | 15:19:24.440 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-17 15:19:24.440743 | orchestrator | 15:19:24.440 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 15:19:24.440750 | orchestrator | 15:19:24.440 STDOUT terraform:  + device_id = (known after apply) 2025-09-17 15:19:24.440791 | orchestrator | 
15:19:24.440 STDOUT terraform:  + device_owner = (known after apply) 2025-09-17 15:19:24.440830 | orchestrator | 15:19:24.440 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-17 15:19:24.440895 | orchestrator | 15:19:24.440 STDOUT terraform:  + dns_name = (known after apply) 2025-09-17 15:19:24.440901 | orchestrator | 15:19:24.440 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.440926 | orchestrator | 15:19:24.440 STDOUT terraform:  + mac_address = (known after apply) 2025-09-17 15:19:24.441004 | orchestrator | 15:19:24.440 STDOUT terraform:  + network_id = (known after apply) 2025-09-17 15:19:24.441011 | orchestrator | 15:19:24.440 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-17 15:19:24.441019 | orchestrator | 15:19:24.440 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-17 15:19:24.441052 | orchestrator | 15:19:24.441 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.441101 | orchestrator | 15:19:24.441 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-17 15:19:24.441108 | orchestrator | 15:19:24.441 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 15:19:24.441136 | orchestrator | 15:19:24.441 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.441177 | orchestrator | 15:19:24.441 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-17 15:19:24.441182 | orchestrator | 15:19:24.441 STDOUT terraform:  } 2025-09-17 15:19:24.441188 | orchestrator | 15:19:24.441 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.441224 | orchestrator | 15:19:24.441 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-17 15:19:24.441232 | orchestrator | 15:19:24.441 STDOUT terraform:  } 2025-09-17 15:19:24.441264 | orchestrator | 15:19:24.441 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.441293 | orchestrator | 15:19:24.441 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-17 
15:19:24.441299 | orchestrator | 15:19:24.441 STDOUT terraform:  } 2025-09-17 15:19:24.441318 | orchestrator | 15:19:24.441 STDOUT terraform:  + allowed_address_pairs { 2025-09-17 15:19:24.441338 | orchestrator | 15:19:24.441 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-17 15:19:24.441347 | orchestrator | 15:19:24.441 STDOUT terraform:  } 2025-09-17 15:19:24.441377 | orchestrator | 15:19:24.441 STDOUT terraform:  + binding (known after apply) 2025-09-17 15:19:24.441382 | orchestrator | 15:19:24.441 STDOUT terraform:  + fixed_ip { 2025-09-17 15:19:24.441408 | orchestrator | 15:19:24.441 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-17 15:19:24.441434 | orchestrator | 15:19:24.441 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-17 15:19:24.441439 | orchestrator | 15:19:24.441 STDOUT terraform:  } 2025-09-17 15:19:24.441445 | orchestrator | 15:19:24.441 STDOUT terraform:  } 2025-09-17 15:19:24.441486 | orchestrator | 15:19:24.441 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-17 15:19:24.441554 | orchestrator | 15:19:24.441 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-17 15:19:24.441587 | orchestrator | 15:19:24.441 STDOUT terraform:  + force_destroy = false 2025-09-17 15:19:24.441628 | orchestrator | 15:19:24.441 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.441637 | orchestrator | 15:19:24.441 STDOUT terraform:  + port_id = (known after apply) 2025-09-17 15:19:24.441674 | orchestrator | 15:19:24.441 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.441683 | orchestrator | 15:19:24.441 STDOUT terraform:  + router_id = (known after apply) 2025-09-17 15:19:24.441712 | orchestrator | 15:19:24.441 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-17 15:19:24.441721 | orchestrator | 15:19:24.441 STDOUT terraform:  } 2025-09-17 15:19:24.441751 | orchestrator | 15:19:24.441 
STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-17 15:19:24.441804 | orchestrator | 15:19:24.441 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-17 15:19:24.441840 | orchestrator | 15:19:24.441 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-17 15:19:24.441929 | orchestrator | 15:19:24.441 STDOUT terraform:  + all_tags = (known after apply) 2025-09-17 15:19:24.441937 | orchestrator | 15:19:24.441 STDOUT terraform:  + availability_zone_hints = [ 2025-09-17 15:19:24.441941 | orchestrator | 15:19:24.441 STDOUT terraform:  + "nova", 2025-09-17 15:19:24.441945 | orchestrator | 15:19:24.441 STDOUT terraform:  ] 2025-09-17 15:19:24.442001 | orchestrator | 15:19:24.441 STDOUT terraform:  + distributed = (known after apply) 2025-09-17 15:19:24.442055 | orchestrator | 15:19:24.441 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-17 15:19:24.442090 | orchestrator | 15:19:24.441 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-17 15:19:24.442188 | orchestrator | 15:19:24.442 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-17 15:19:24.442193 | orchestrator | 15:19:24.442 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.442256 | orchestrator | 15:19:24.442 STDOUT terraform:  + name = "testbed" 2025-09-17 15:19:24.442263 | orchestrator | 15:19:24.442 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.442267 | orchestrator | 15:19:24.442 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 15:19:24.442271 | orchestrator | 15:19:24.442 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-17 15:19:24.442283 | orchestrator | 15:19:24.442 STDOUT terraform:  } 2025-09-17 15:19:24.442316 | orchestrator | 15:19:24.442 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-17 15:19:24.442365 | 
orchestrator | 15:19:24.442 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-17 15:19:24.442396 | orchestrator | 15:19:24.442 STDOUT terraform:  + description = "ssh" 2025-09-17 15:19:24.442419 | orchestrator | 15:19:24.442 STDOUT terraform:  + direction = "ingress" 2025-09-17 15:19:24.442435 | orchestrator | 15:19:24.442 STDOUT terraform:  + ethertype = "IPv4" 2025-09-17 15:19:24.442472 | orchestrator | 15:19:24.442 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.442486 | orchestrator | 15:19:24.442 STDOUT terraform:  + port_range_max = 22 2025-09-17 15:19:24.442510 | orchestrator | 15:19:24.442 STDOUT terraform:  + port_range_min = 22 2025-09-17 15:19:24.442560 | orchestrator | 15:19:24.442 STDOUT terraform:  + protocol = "tcp" 2025-09-17 15:19:24.442566 | orchestrator | 15:19:24.442 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.442599 | orchestrator | 15:19:24.442 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-17 15:19:24.442620 | orchestrator | 15:19:24.442 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-17 15:19:24.442654 | orchestrator | 15:19:24.442 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-17 15:19:24.442704 | orchestrator | 15:19:24.442 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-17 15:19:24.442711 | orchestrator | 15:19:24.442 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 15:19:24.442738 | orchestrator | 15:19:24.442 STDOUT terraform:  } 2025-09-17 15:19:24.442782 | orchestrator | 15:19:24.442 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-17 15:19:24.442837 | orchestrator | 15:19:24.442 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-17 15:19:24.442869 | orchestrator | 15:19:24.442 STDOUT terraform:  + 
description = "wireguard" 2025-09-17 15:19:24.442875 | orchestrator | 15:19:24.442 STDOUT terraform:  + direction = "ingress" 2025-09-17 15:19:24.442915 | orchestrator | 15:19:24.442 STDOUT terraform:  + ethertype = "IPv4" 2025-09-17 15:19:24.442974 | orchestrator | 15:19:24.442 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.442982 | orchestrator | 15:19:24.442 STDOUT terraform:  + port_range_max = 51820 2025-09-17 15:19:24.442986 | orchestrator | 15:19:24.442 STDOUT terraform:  + port_range_min = 51820 2025-09-17 15:19:24.442991 | orchestrator | 15:19:24.442 STDOUT terraform:  + protocol = "udp" 2025-09-17 15:19:24.443034 | orchestrator | 15:19:24.442 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.443069 | orchestrator | 15:19:24.443 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-17 15:19:24.443100 | orchestrator | 15:19:24.443 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-17 15:19:24.443128 | orchestrator | 15:19:24.443 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-17 15:19:24.443158 | orchestrator | 15:19:24.443 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-17 15:19:24.443196 | orchestrator | 15:19:24.443 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 15:19:24.443202 | orchestrator | 15:19:24.443 STDOUT terraform:  } 2025-09-17 15:19:24.443296 | orchestrator | 15:19:24.443 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-17 15:19:24.443308 | orchestrator | 15:19:24.443 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-17 15:19:24.443343 | orchestrator | 15:19:24.443 STDOUT terraform:  + direction = "ingress" 2025-09-17 15:19:24.443353 | orchestrator | 15:19:24.443 STDOUT terraform:  + ethertype = "IPv4" 2025-09-17 15:19:24.443397 | orchestrator | 15:19:24.443 STDOUT terraform:  + id = (known 
after apply) 2025-09-17 15:19:24.443405 | orchestrator | 15:19:24.443 STDOUT terraform:  + protocol = "tcp" 2025-09-17 15:19:24.443446 | orchestrator | 15:19:24.443 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.443476 | orchestrator | 15:19:24.443 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-17 15:19:24.443510 | orchestrator | 15:19:24.443 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-17 15:19:24.443552 | orchestrator | 15:19:24.443 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-17 15:19:24.443586 | orchestrator | 15:19:24.443 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-17 15:19:24.443623 | orchestrator | 15:19:24.443 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 15:19:24.443628 | orchestrator | 15:19:24.443 STDOUT terraform:  } 2025-09-17 15:19:24.443673 | orchestrator | 15:19:24.443 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-17 15:19:24.443765 | orchestrator | 15:19:24.443 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-17 15:19:24.443771 | orchestrator | 15:19:24.443 STDOUT terraform:  + direction = "ingress" 2025-09-17 15:19:24.443775 | orchestrator | 15:19:24.443 STDOUT terraform:  + ethertype = "IPv4" 2025-09-17 15:19:24.443819 | orchestrator | 15:19:24.443 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.443827 | orchestrator | 15:19:24.443 STDOUT terraform:  + protocol = "udp" 2025-09-17 15:19:24.443872 | orchestrator | 15:19:24.443 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.443893 | orchestrator | 15:19:24.443 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-17 15:19:24.443924 | orchestrator | 15:19:24.443 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-17 15:19:24.443988 | orchestrator | 
15:19:24.443 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-17 15:19:24.443996 | orchestrator | 15:19:24.443 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-17 15:19:24.444031 | orchestrator | 15:19:24.443 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 15:19:24.444038 | orchestrator | 15:19:24.444 STDOUT terraform:  } 2025-09-17 15:19:24.444085 | orchestrator | 15:19:24.444 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-17 15:19:24.444165 | orchestrator | 15:19:24.444 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-17 15:19:24.444171 | orchestrator | 15:19:24.444 STDOUT terraform:  + direction = "ingress" 2025-09-17 15:19:24.444177 | orchestrator | 15:19:24.444 STDOUT terraform:  + ethertype = "IPv4" 2025-09-17 15:19:24.444244 | orchestrator | 15:19:24.444 STDOUT terraform:  + id = (known after apply) 2025-09-17 15:19:24.444252 | orchestrator | 15:19:24.444 STDOUT terraform:  + protocol = "icmp" 2025-09-17 15:19:24.444283 | orchestrator | 15:19:24.444 STDOUT terraform:  + region = (known after apply) 2025-09-17 15:19:24.444357 | orchestrator | 15:19:24.444 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-17 15:19:24.444363 | orchestrator | 15:19:24.444 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-17 15:19:24.444395 | orchestrator | 15:19:24.444 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-17 15:19:24.444401 | orchestrator | 15:19:24.444 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-17 15:19:24.444434 | orchestrator | 15:19:24.444 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-17 15:19:24.444443 | orchestrator | 15:19:24.444 STDOUT terraform:  } 2025-09-17 15:19:24.444500 | orchestrator | 15:19:24.444 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2025-09-17 15:19:24.444563 | orchestrator | 15:19:24.444 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-09-17 15:19:24.444572 | orchestrator | 15:19:24.444 STDOUT terraform:  + direction = "ingress"
2025-09-17 15:19:24.444577 | orchestrator | 15:19:24.444 STDOUT terraform:  + ethertype = "IPv4"
2025-09-17 15:19:24.444659 | orchestrator | 15:19:24.444 STDOUT terraform:  + id = (known after apply)
2025-09-17 15:19:24.444665 | orchestrator | 15:19:24.444 STDOUT terraform:  + protocol = "tcp"
2025-09-17 15:19:24.444673 | orchestrator | 15:19:24.444 STDOUT terraform:  + region = (known after apply)
2025-09-17 15:19:24.444711 | orchestrator | 15:19:24.444 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-17 15:19:24.444745 | orchestrator | 15:19:24.444 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-17 15:19:24.444768 | orchestrator | 15:19:24.444 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-17 15:19:24.444805 | orchestrator | 15:19:24.444 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-17 15:19:24.444834 | orchestrator | 15:19:24.444 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 15:19:24.444841 | orchestrator | 15:19:24.444 STDOUT terraform:  }
2025-09-17 15:19:24.444917 | orchestrator | 15:19:24.444 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-09-17 15:19:24.444939 | orchestrator | 15:19:24.444 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-09-17 15:19:24.444975 | orchestrator | 15:19:24.444 STDOUT terraform:  + direction = "ingress"
2025-09-17 15:19:24.444981 | orchestrator | 15:19:24.444 STDOUT terraform:  + ethertype = "IPv4"
2025-09-17 15:19:24.445036 | orchestrator | 15:19:24.444 STDOUT terraform:  + id = (known after apply)
2025-09-17 15:19:24.445045 | orchestrator | 15:19:24.445 STDOUT terraform:  + protocol = "udp"
2025-09-17 15:19:24.445112 | orchestrator | 15:19:24.445 STDOUT terraform:  + region = (known after apply)
2025-09-17 15:19:24.445118 | orchestrator | 15:19:24.445 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-17 15:19:24.445146 | orchestrator | 15:19:24.445 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-17 15:19:24.445177 | orchestrator | 15:19:24.445 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-17 15:19:24.445207 | orchestrator | 15:19:24.445 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-17 15:19:24.445269 | orchestrator | 15:19:24.445 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 15:19:24.445276 | orchestrator | 15:19:24.445 STDOUT terraform:  }
2025-09-17 15:19:24.445345 | orchestrator | 15:19:24.445 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-09-17 15:19:24.445373 | orchestrator | 15:19:24.445 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-09-17 15:19:24.445437 | orchestrator | 15:19:24.445 STDOUT terraform:  + direction = "ingress"
2025-09-17 15:19:24.445443 | orchestrator | 15:19:24.445 STDOUT terraform:  + ethertype = "IPv4"
2025-09-17 15:19:24.445451 | orchestrator | 15:19:24.445 STDOUT terraform:  + id = (known after apply)
2025-09-17 15:19:24.445481 | orchestrator | 15:19:24.445 STDOUT terraform:  + protocol = "icmp"
2025-09-17 15:19:24.445520 | orchestrator | 15:19:24.445 STDOUT terraform:  + region = (known after apply)
2025-09-17 15:19:24.445542 | orchestrator | 15:19:24.445 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-17 15:19:24.445605 | orchestrator | 15:19:24.445 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-17 15:19:24.445610 | orchestrator | 15:19:24.445 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-17 15:19:24.445641 | orchestrator | 15:19:24.445 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-17 15:19:24.445669 | orchestrator | 15:19:24.445 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 15:19:24.445675 | orchestrator | 15:19:24.445 STDOUT terraform:  }
2025-09-17 15:19:24.445729 | orchestrator | 15:19:24.445 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-09-17 15:19:24.445779 | orchestrator | 15:19:24.445 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-09-17 15:19:24.445786 | orchestrator | 15:19:24.445 STDOUT terraform:  + description = "vrrp"
2025-09-17 15:19:24.445821 | orchestrator | 15:19:24.445 STDOUT terraform:  + direction = "ingress"
2025-09-17 15:19:24.445837 | orchestrator | 15:19:24.445 STDOUT terraform:  + ethertype = "IPv4"
2025-09-17 15:19:24.445885 | orchestrator | 15:19:24.445 STDOUT terraform:  + id = (known after apply)
2025-09-17 15:19:24.445891 | orchestrator | 15:19:24.445 STDOUT terraform:  + protocol = "112"
2025-09-17 15:19:24.445945 | orchestrator | 15:19:24.445 STDOUT terraform:  + region = (known after apply)
2025-09-17 15:19:24.445965 | orchestrator | 15:19:24.445 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-17 15:19:24.445995 | orchestrator | 15:19:24.445 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-17 15:19:24.446046 | orchestrator | 15:19:24.445 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-17 15:19:24.446085 | orchestrator | 15:19:24.446 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-17 15:19:24.446106 | orchestrator | 15:19:24.446 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 15:19:24.446113 | orchestrator | 15:19:24.446 STDOUT terraform:  }
2025-09-17 15:19:24.446167 | orchestrator | 15:19:24.446 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-09-17 15:19:24.446243 | orchestrator | 15:19:24.446 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-09-17 15:19:24.446249 | orchestrator | 15:19:24.446 STDOUT terraform:  + all_tags = (known after apply)
2025-09-17 15:19:24.446314 | orchestrator | 15:19:24.446 STDOUT terraform:  + description = "management security group"
2025-09-17 15:19:24.446322 | orchestrator | 15:19:24.446 STDOUT terraform:  + id = (known after apply)
2025-09-17 15:19:24.446328 | orchestrator | 15:19:24.446 STDOUT terraform:  + name = "testbed-management"
2025-09-17 15:19:24.446373 | orchestrator | 15:19:24.446 STDOUT terraform:  + region = (known after apply)
2025-09-17 15:19:24.446380 | orchestrator | 15:19:24.446 STDOUT terraform:  + stateful = (known after apply)
2025-09-17 15:19:24.446406 | orchestrator | 15:19:24.446 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 15:19:24.446413 | orchestrator | 15:19:24.446 STDOUT terraform:  }
2025-09-17 15:19:24.446473 | orchestrator | 15:19:24.446 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-09-17 15:19:24.446516 | orchestrator | 15:19:24.446 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-09-17 15:19:24.446524 | orchestrator | 15:19:24.446 STDOUT terraform:  + all_tags = (known after apply)
2025-09-17 15:19:24.446569 | orchestrator | 15:19:24.446 STDOUT terraform:  + description = "node security group"
2025-09-17 15:19:24.446574 | orchestrator | 15:19:24.446 STDOUT terraform:  + id = (known after apply)
2025-09-17 15:19:24.446593 | orchestrator | 15:19:24.446 STDOUT terraform:  + name = "testbed-node"
2025-09-17 15:19:24.446620 | orchestrator | 15:19:24.446 STDOUT terraform:  + region = (known after apply)
2025-09-17 15:19:24.446645 | orchestrator | 15:19:24.446 STDOUT terraform:  + stateful = (known after apply)
2025-09-17 15:19:24.446675 | orchestrator | 15:19:24.446 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 15:19:24.446684 | orchestrator | 15:19:24.446 STDOUT terraform:  }
2025-09-17 15:19:24.446725 | orchestrator | 15:19:24.446 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-09-17 15:19:24.446776 | orchestrator | 15:19:24.446 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-09-17 15:19:24.446786 | orchestrator | 15:19:24.446 STDOUT terraform:  + all_tags = (known after apply)
2025-09-17 15:19:24.446824 | orchestrator | 15:19:24.446 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-09-17 15:19:24.446831 | orchestrator | 15:19:24.446 STDOUT terraform:  + dns_nameservers = [
2025-09-17 15:19:24.446839 | orchestrator | 15:19:24.446 STDOUT terraform:  + "8.8.8.8",
2025-09-17 15:19:24.446852 | orchestrator | 15:19:24.446 STDOUT terraform:  + "9.9.9.9",
2025-09-17 15:19:24.446870 | orchestrator | 15:19:24.446 STDOUT terraform:  ]
2025-09-17 15:19:24.446916 | orchestrator | 15:19:24.446 STDOUT terraform:  + enable_dhcp = true
2025-09-17 15:19:24.446921 | orchestrator | 15:19:24.446 STDOUT terraform:  + gateway_ip = (known after apply)
2025-09-17 15:19:24.446949 | orchestrator | 15:19:24.446 STDOUT terraform:  + id = (known after apply)
2025-09-17 15:19:24.446955 | orchestrator | 15:19:24.446 STDOUT terraform:  + ip_version = 4
2025-09-17 15:19:24.446988 | orchestrator | 15:19:24.446 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-09-17 15:19:24.447022 | orchestrator | 15:19:24.446 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-09-17 15:19:24.447049 | orchestrator | 15:19:24.447 STDOUT terraform:  + name = "subnet-testbed-management"
2025-09-17 15:19:24.447108 | orchestrator | 15:19:24.447 STDOUT terraform:  + network_id = (known after apply)
2025-09-17 15:19:24.447116 | orchestrator | 15:19:24.447 STDOUT terraform:  + no_gateway = false
2025-09-17 15:19:24.447121 | orchestrator | 15:19:24.447 STDOUT terraform:  + region = (known after apply)
2025-09-17 15:19:24.447165 | orchestrator | 15:19:24.447 STDOUT terraform:  + service_types = (known after apply)
2025-09-17 15:19:24.447170 | orchestrator | 15:19:24.447 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-17 15:19:24.447189 | orchestrator | 15:19:24.447 STDOUT terraform:  + allocation_pool {
2025-09-17 15:19:24.447211 | orchestrator | 15:19:24.447 STDOUT terraform:  + end = "192.168.31.250"
2025-09-17 15:19:24.447297 | orchestrator | 15:19:24.447 STDOUT terraform:  + start = "192.168.31.200"
2025-09-17 15:19:24.447303 | orchestrator | 15:19:24.447 STDOUT terraform:  }
2025-09-17 15:19:24.447307 | orchestrator | 15:19:24.447 STDOUT terraform:  }
2025-09-17 15:19:24.447314 | orchestrator | 15:19:24.447 STDOUT terraform:  # terraform_data.image will be created
2025-09-17 15:19:24.447319 | orchestrator | 15:19:24.447 STDOUT terraform:  + resource "terraform_data" "image" {
2025-09-17 15:19:24.447326 | orchestrator | 15:19:24.447 STDOUT terraform:  + id = (known after apply)
2025-09-17 15:19:24.447354 | orchestrator | 15:19:24.447 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-17 15:19:24.447366 | orchestrator | 15:19:24.447 STDOUT terraform:  + output = (known after apply)
2025-09-17 15:19:24.447373 | orchestrator | 15:19:24.447 STDOUT terraform:  }
2025-09-17 15:19:24.447410 | orchestrator | 15:19:24.447 STDOUT terraform:  # terraform_data.image_node will be created
2025-09-17 15:19:24.447422 | orchestrator | 15:19:24.447 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-09-17 15:19:24.447484 | orchestrator | 15:19:24.447 STDOUT terraform:  + id = (known after apply)
2025-09-17 15:19:24.447492 | orchestrator | 15:19:24.447 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-17 15:19:24.447499 | orchestrator | 15:19:24.447 STDOUT terraform:  + output = (known after apply)
2025-09-17 15:19:24.447506 | orchestrator | 15:19:24.447 STDOUT terraform:  }
2025-09-17 15:19:24.447514 | orchestrator | 15:19:24.447 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-09-17 15:19:24.447530 | orchestrator | 15:19:24.447 STDOUT terraform: Changes to Outputs:
2025-09-17 15:19:24.447539 | orchestrator | 15:19:24.447 STDOUT terraform:  + manager_address = (sensitive value)
2025-09-17 15:19:24.447564 | orchestrator | 15:19:24.447 STDOUT terraform:  + private_key = (sensitive value)
2025-09-17 15:19:24.663927 | orchestrator | 15:19:24.663 STDOUT terraform: terraform_data.image: Creating...
2025-09-17 15:19:24.663986 | orchestrator | 15:19:24.663 STDOUT terraform: terraform_data.image_node: Creating...
2025-09-17 15:19:24.663993 | orchestrator | 15:19:24.663 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=c98715a8-14d4-6474-29c9-b7ba899c03cf]
2025-09-17 15:19:24.664000 | orchestrator | 15:19:24.663 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=9d287f33-e5a9-5bd5-ec3e-4c4a041b4606]
2025-09-17 15:19:24.693997 | orchestrator | 15:19:24.693 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-09-17 15:19:24.694937 | orchestrator | 15:19:24.694 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-17 15:19:24.697000 | orchestrator | 15:19:24.696 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-17 15:19:24.702193 | orchestrator | 15:19:24.698 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-17 15:19:24.704685 | orchestrator | 15:19:24.704 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-17 15:19:24.717139 | orchestrator | 15:19:24.714 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-17 15:19:24.717613 | orchestrator | 15:19:24.715 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-09-17 15:19:24.717732 | orchestrator | 15:19:24.715 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-17 15:19:24.717742 | orchestrator | 15:19:24.716 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-17 15:19:24.721755 | orchestrator | 15:19:24.720 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-17 15:19:25.181894 | orchestrator | 15:19:25.181 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-17 15:19:25.188096 | orchestrator | 15:19:25.187 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-17 15:19:25.255239 | orchestrator | 15:19:25.254 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-09-17 15:19:25.261389 | orchestrator | 15:19:25.261 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-17 15:19:25.907889 | orchestrator | 15:19:25.907 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=22346f57-9d13-4ce3-9eb6-ec45a496a356]
2025-09-17 15:19:25.910433 | orchestrator | 15:19:25.910 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-09-17 15:19:25.965174 | orchestrator | 15:19:25.964 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-17 15:19:25.968929 | orchestrator | 15:19:25.968 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-17 15:19:28.369748 | orchestrator | 15:19:28.369 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=abcb278f-9464-4e60-af45-8a9c7109c560]
2025-09-17 15:19:28.373335 | orchestrator | 15:19:28.373 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-17 15:19:28.383442 | orchestrator | 15:19:28.383 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=11507ddf-c78f-4c5a-8643-6bad2a8b39ae]
2025-09-17 15:19:28.388337 | orchestrator | 15:19:28.388 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-17 15:19:28.408647 | orchestrator | 15:19:28.408 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=ff1b16a2-3e0a-432a-b441-b3fe8b453f6d]
2025-09-17 15:19:28.409105 | orchestrator | 15:19:28.408 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=2c018cbd-00e9-4926-8b68-5b46915e5cd3]
2025-09-17 15:19:28.424858 | orchestrator | 15:19:28.424 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-17 15:19:28.434089 | orchestrator | 15:19:28.429 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-17 15:19:28.434137 | orchestrator | 15:19:28.431 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=4c46eecc-90d4-4da2-9e84-51f99bffdbae]
2025-09-17 15:19:28.438087 | orchestrator | 15:19:28.435 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-17 15:19:28.451588 | orchestrator | 15:19:28.450 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=389a752f-d381-48a7-a4b5-e7f86559b7a2]
2025-09-17 15:19:28.451770 | orchestrator | 15:19:28.451 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=81270acf-a9ce-49fe-b935-471dffd13372]
2025-09-17 15:19:28.462085 | orchestrator | 15:19:28.460 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-17 15:19:28.468762 | orchestrator | 15:19:28.466 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-17 15:19:28.469617 | orchestrator | 15:19:28.469 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=dfa5a64f8b8e3816ca55b7e194d26586e1ac10c0]
2025-09-17 15:19:28.473506 | orchestrator | 15:19:28.473 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-17 15:19:28.480817 | orchestrator | 15:19:28.480 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=7b92d5cef8a537c80c5a1d6a45e0b8599d69ffaf]
2025-09-17 15:19:28.484504 | orchestrator | 15:19:28.484 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=2998f2ff-923f-4644-b235-1d192431ff16]
2025-09-17 15:19:28.485323 | orchestrator | 15:19:28.485 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-17 15:19:28.508788 | orchestrator | 15:19:28.508 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=5c4f947a-1fa9-4d40-922c-b00760e10f53]
2025-09-17 15:19:29.339458 | orchestrator | 15:19:29.339 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=5198831b-ccf7-4bda-9ab5-c4d193685229]
2025-09-17 15:19:29.809468 | orchestrator | 15:19:29.809 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=803f14e9-13be-4623-974e-6f1a724ab23f]
2025-09-17 15:19:29.823869 | orchestrator | 15:19:29.823 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-17 15:19:31.770978 | orchestrator | 15:19:31.770 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=ba0d02f0-2b9a-4a66-9287-74da8ed1487d]
2025-09-17 15:19:31.787444 | orchestrator | 15:19:31.787 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=c923d257-7213-4d28-88ba-25ec4a127767]
2025-09-17 15:19:31.833784 | orchestrator | 15:19:31.833 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=0769a619-0eb8-44c9-9217-96fd06089110]
2025-09-17 15:19:31.857074 | orchestrator | 15:19:31.856 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=e7e177ac-52b2-47f4-af69-5c6cfcbe9873]
2025-09-17 15:19:31.880165 | orchestrator | 15:19:31.879 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=f03ffbc3-55bd-4a36-bbb9-e17837daa0e3]
2025-09-17 15:19:31.971919 | orchestrator | 15:19:31.971 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=7cfeb536-2f17-4290-8db9-eae7b72314bf]
2025-09-17 15:19:33.164862 | orchestrator | 15:19:33.164 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=40ea4ba0-df77-417b-bca2-633d3948a1a7]
2025-09-17 15:19:33.172789 | orchestrator | 15:19:33.172 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-17 15:19:33.172872 | orchestrator | 15:19:33.172 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-17 15:19:33.174236 | orchestrator | 15:19:33.174 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-17 15:19:33.392540 | orchestrator | 15:19:33.388 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=ae24dbdf-0d16-4e2b-91e1-73182f81c825]
2025-09-17 15:19:33.410359 | orchestrator | 15:19:33.410 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-17 15:19:33.411354 | orchestrator | 15:19:33.411 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-17 15:19:33.411577 | orchestrator | 15:19:33.411 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-17 15:19:33.411954 | orchestrator | 15:19:33.411 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-17 15:19:33.414798 | orchestrator | 15:19:33.414 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-17 15:19:33.416378 | orchestrator | 15:19:33.416 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-17 15:19:33.417075 | orchestrator | 15:19:33.416 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-17 15:19:33.417553 | orchestrator | 15:19:33.417 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-17 15:19:33.437755 | orchestrator | 15:19:33.437 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=f177fc4b-005c-4262-897d-7b938aa3dbc9]
2025-09-17 15:19:33.444453 | orchestrator | 15:19:33.444 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-17 15:19:33.575808 | orchestrator | 15:19:33.575 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=7b4384c6-71c9-4e5d-8b01-aa9afa85d32d]
2025-09-17 15:19:33.592882 | orchestrator | 15:19:33.592 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-17 15:19:33.814323 | orchestrator | 15:19:33.813 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=1f63afb3-f31d-4b1c-b3f6-5782ea1ffc82]
2025-09-17 15:19:33.821753 | orchestrator | 15:19:33.821 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-17 15:19:34.030912 | orchestrator | 15:19:34.030 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=9061eb76-b24e-4d04-8b2d-98b65b0c4ebf]
2025-09-17 15:19:34.037912 | orchestrator | 15:19:34.037 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-17 15:19:34.144705 | orchestrator | 15:19:34.144 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=76c70a21-f7ad-4206-830d-0bf974997a6b]
2025-09-17 15:19:34.151457 | orchestrator | 15:19:34.151 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-17 15:19:34.177741 | orchestrator | 15:19:34.177 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=79506143-bacf-4bfa-9712-50fe652bc475]
2025-09-17 15:19:34.183487 | orchestrator | 15:19:34.183 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-17 15:19:34.190564 | orchestrator | 15:19:34.190 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=a4bcc1cb-23fa-4882-a494-d6a2cf628340]
2025-09-17 15:19:34.193727 | orchestrator | 15:19:34.193 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=cc732842-7101-4621-b405-c124ca773b3a]
2025-09-17 15:19:34.199020 | orchestrator | 15:19:34.198 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-17 15:19:34.206273 | orchestrator | 15:19:34.206 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-17 15:19:34.231839 | orchestrator | 15:19:34.231 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=b6fbf860-0639-4376-a03f-e543988c1c4f]
2025-09-17 15:19:34.242078 | orchestrator | 15:19:34.241 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=14623353-a3fa-419a-af86-bf80f2989f3a]
2025-09-17 15:19:34.370856 | orchestrator | 15:19:34.370 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=1e9dea3a-51cc-4526-93d3-5f916798e220]
2025-09-17 15:19:34.434653 | orchestrator | 15:19:34.434 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=cd565ca7-a40f-47c5-9b1e-2bc6651e1362]
2025-09-17 15:19:34.441883 | orchestrator | 15:19:34.441 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=13afbc81-da52-4c80-a43f-c80a15d76375]
2025-09-17 15:19:34.601479 | orchestrator | 15:19:34.601 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=e69c8835-9829-4447-90c0-3ec1edec49ad]
2025-09-17 15:19:34.738750 | orchestrator | 15:19:34.738 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=cde2ea7e-fe52-4910-90e8-b36d87f422ac]
2025-09-17 15:19:34.751480 | orchestrator | 15:19:34.751 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=35bd2528-809a-4bf1-a833-d3be102b1cc1]
2025-09-17 15:19:34.937805 | orchestrator | 15:19:34.937 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=216333d0-cb67-4625-b953-fbc9d20924ce]
2025-09-17 15:19:37.344954 | orchestrator | 15:19:37.344 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=4a1d3a26-5060-4bdf-b515-85290ef88f5a]
2025-09-17 15:19:37.360095 | orchestrator | 15:19:37.359 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-17 15:19:37.388856 | orchestrator | 15:19:37.388 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-17 15:19:37.388924 | orchestrator | 15:19:37.388 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-17 15:19:37.388930 | orchestrator | 15:19:37.388 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-17 15:19:37.393761 | orchestrator | 15:19:37.393 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-17 15:19:37.398138 | orchestrator | 15:19:37.398 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-17 15:19:37.403210 | orchestrator | 15:19:37.403 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-17 15:19:38.661037 | orchestrator | 15:19:38.660 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=c125e61a-86c2-4f06-88f1-c8479fa19e3a]
2025-09-17 15:19:38.679211 | orchestrator | 15:19:38.679 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-17 15:19:38.679319 | orchestrator | 15:19:38.679 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-17 15:19:38.684932 | orchestrator | 15:19:38.684 STDOUT terraform: local_file.inventory: Creating...
2025-09-17 15:19:38.686464 | orchestrator | 15:19:38.686 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=17d1a1607290eee758ea92d560feb33ec96ead31]
2025-09-17 15:19:38.690044 | orchestrator | 15:19:38.689 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=bac132a5e6e11eb2ec709cd02122b160d1ed139c]
2025-09-17 15:19:40.003387 | orchestrator | 15:19:40.002 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=c125e61a-86c2-4f06-88f1-c8479fa19e3a]
2025-09-17 15:19:47.394140 | orchestrator | 15:19:47.393 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-17 15:19:47.396101 | orchestrator | 15:19:47.395 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-17 15:19:47.396314 | orchestrator | 15:19:47.396 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-17 15:19:47.399459 | orchestrator | 15:19:47.399 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-17 15:19:47.399783 | orchestrator | 15:19:47.399 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-17 15:19:47.404759 | orchestrator | 15:19:47.404 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-17 15:19:57.395176 | orchestrator | 15:19:57.394 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-17 15:19:57.396161 | orchestrator | 15:19:57.396 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-17 15:19:57.397302 | orchestrator | 15:19:57.397 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-17 15:19:57.400252 | orchestrator | 15:19:57.400 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-17 15:19:57.400442 | orchestrator | 15:19:57.400 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-17 15:19:57.405842 | orchestrator | 15:19:57.405 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-17 15:19:57.952316 | orchestrator | 15:19:57.951 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=000e0057-6de1-4708-af58-b1dea2e4e1a3]
2025-09-17 15:19:58.041512 | orchestrator | 15:19:58.041 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=a18c6aaa-3393-4e44-80a7-4ffcf56e1918]
2025-09-17 15:19:58.063469 | orchestrator | 15:19:58.063 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=f58f5965-c7df-43c6-8dbc-943bc8f95415]
2025-09-17 15:20:07.399737 | orchestrator | 15:20:07.399 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-09-17 15:20:07.401894 | orchestrator | 15:20:07.401 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-09-17 15:20:07.406108 | orchestrator | 15:20:07.405 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-09-17 15:20:08.006705 | orchestrator | 15:20:08.006 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=57eef797-02d6-45de-9f67-afe033813298]
2025-09-17 15:20:08.333993 | orchestrator | 15:20:08.333 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=2fd5b831-c25f-4313-b146-2e3b2dd4a412]
2025-09-17 15:20:09.024658 | orchestrator | 15:20:09.024 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 32s [id=d015becb-0cc2-4774-a4ab-0c3af3ea83fe]
2025-09-17 15:20:09.039603 | orchestrator | 15:20:09.039 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-17 15:20:09.055121 | orchestrator | 15:20:09.054 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-17 15:20:09.065914 | orchestrator | 15:20:09.065 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=5090307861620168531]
2025-09-17 15:20:09.066568 | orchestrator | 15:20:09.066 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-17 15:20:09.066599 | orchestrator | 15:20:09.066 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-17 15:20:09.067942 | orchestrator | 15:20:09.067 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-17 15:20:09.068970 | orchestrator | 15:20:09.068 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-17 15:20:09.068992 | orchestrator | 15:20:09.068 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-17 15:20:09.069000 | orchestrator | 15:20:09.068 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-17 15:20:09.075799 | orchestrator | 15:20:09.075 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-17 15:20:09.087819 | orchestrator | 15:20:09.087 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-17 15:20:09.103753 | orchestrator | 15:20:09.103 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-17 15:20:12.480050 | orchestrator | 15:20:12.479 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=57eef797-02d6-45de-9f67-afe033813298/4c46eecc-90d4-4da2-9e84-51f99bffdbae]
2025-09-17 15:20:12.503566 | orchestrator | 15:20:12.503 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=f58f5965-c7df-43c6-8dbc-943bc8f95415/ff1b16a2-3e0a-432a-b441-b3fe8b453f6d]
2025-09-17 15:20:12.518453 | orchestrator | 15:20:12.518 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=a18c6aaa-3393-4e44-80a7-4ffcf56e1918/81270acf-a9ce-49fe-b935-471dffd13372]
2025-09-17 15:20:12.560309 | orchestrator | 15:20:12.559 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=57eef797-02d6-45de-9f67-afe033813298/2c018cbd-00e9-4926-8b68-5b46915e5cd3]
2025-09-17 15:20:12.565609 | orchestrator | 15:20:12.565 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=f58f5965-c7df-43c6-8dbc-943bc8f95415/11507ddf-c78f-4c5a-8643-6bad2a8b39ae]
2025-09-17 15:20:12.632369 | orchestrator | 15:20:12.631 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=a18c6aaa-3393-4e44-80a7-4ffcf56e1918/5c4f947a-1fa9-4d40-922c-b00760e10f53]
2025-09-17 15:20:18.648991 | orchestrator | 15:20:18.648 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=57eef797-02d6-45de-9f67-afe033813298/2998f2ff-923f-4644-b235-1d192431ff16]
2025-09-17 15:20:18.674499 | orchestrator | 15:20:18.673 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=f58f5965-c7df-43c6-8dbc-943bc8f95415/389a752f-d381-48a7-a4b5-e7f86559b7a2]
2025-09-17 15:20:18.701142 | orchestrator | 15:20:18.700 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=a18c6aaa-3393-4e44-80a7-4ffcf56e1918/abcb278f-9464-4e60-af45-8a9c7109c560]
2025-09-17 15:20:19.101819 | orchestrator | 15:20:19.101 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-17 15:20:29.102377 | orchestrator | 15:20:29.101 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-17 15:20:29.626583 | orchestrator | 15:20:29.626 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=ef71dd60-3d93-439a-be47-893b9560d1bf]
2025-09-17 15:20:29.656969 | orchestrator | 15:20:29.656 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-17 15:20:29.657064 | orchestrator | 15:20:29.656 STDOUT terraform: Outputs: 2025-09-17 15:20:29.657081 | orchestrator | 15:20:29.656 STDOUT terraform: manager_address = 2025-09-17 15:20:29.657110 | orchestrator | 15:20:29.656 STDOUT terraform: private_key = 2025-09-17 15:20:29.745008 | orchestrator | ok: Runtime: 0:01:10.332158 2025-09-17 15:20:29.774049 | 2025-09-17 15:20:29.774169 | TASK [Fetch manager address] 2025-09-17 15:20:30.221446 | orchestrator | ok 2025-09-17 15:20:30.230542 | 2025-09-17 15:20:30.230654 | TASK [Set manager_host address] 2025-09-17 15:20:30.308840 | orchestrator | ok 2025-09-17 15:20:30.317690 | 2025-09-17 15:20:30.317866 | LOOP [Update ansible collections] 2025-09-17 15:20:31.203966 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-17 15:20:31.204380 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-17 15:20:31.204435 | orchestrator | Starting galaxy collection install process 2025-09-17 15:20:31.204471 | orchestrator | Process install dependency map 2025-09-17 15:20:31.204503 | orchestrator | Starting collection install process 2025-09-17 15:20:31.204531 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-09-17 15:20:31.204561 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-09-17 15:20:31.204595 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-17 15:20:31.204657 | orchestrator | ok: Item: commons Runtime: 0:00:00.566642 2025-09-17 15:20:32.140279 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-17 15:20:32.140533 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-17 15:20:32.140628 | orchestrator | Starting galaxy 
collection install process 2025-09-17 15:20:32.140696 | orchestrator | Process install dependency map 2025-09-17 15:20:32.140760 | orchestrator | Starting collection install process 2025-09-17 15:20:32.140818 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-09-17 15:20:32.140877 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-09-17 15:20:32.140958 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-17 15:20:32.141093 | orchestrator | ok: Item: services Runtime: 0:00:00.659417 2025-09-17 15:20:32.160910 | 2025-09-17 15:20:32.161071 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-17 15:20:42.708816 | orchestrator | ok 2025-09-17 15:20:42.719896 | 2025-09-17 15:20:42.720062 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-17 15:21:42.768607 | orchestrator | ok 2025-09-17 15:21:42.778793 | 2025-09-17 15:21:42.778976 | TASK [Fetch manager ssh hostkey] 2025-09-17 15:21:44.356378 | orchestrator | Output suppressed because no_log was given 2025-09-17 15:21:44.374444 | 2025-09-17 15:21:44.374607 | TASK [Get ssh keypair from terraform environment] 2025-09-17 15:21:44.910697 | orchestrator | ok: Runtime: 0:00:00.010944 2025-09-17 15:21:44.925856 | 2025-09-17 15:21:44.926034 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-17 15:21:44.962549 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
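The "Wait up to 300 seconds for port 22" task above is Ansible polling the manager until the SSH banner (which contains "OpenSSH") is served. A rough shell equivalent of that check, as a sketch only: the `wait_for_ssh` helper, its arguments, and the defaults are illustrative and not part of the testbed scripts.

```shell
# Poll a TCP port until its banner contains "OpenSSH", similar to what
# Ansible's wait_for with search_regex does in the task above.
# Arguments: host, port (default 22), timeout in seconds (default 300).
wait_for_ssh() {
  local host=$1 port=${2:-22} limit=${3:-300}
  local deadline=$((SECONDS + limit)) banner
  while (( SECONDS < deadline )); do
    # /dev/tcp/<host>/<port> is a bash redirection; read the banner's first bytes
    if banner=$(timeout 5 bash -c "head -c 64 < /dev/tcp/$host/$port" 2>/dev/null) \
        && [[ $banner == *OpenSSH* ]]; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```

The extra "Wait a little longer" task that follows exists because the port opening does not yet mean cloud-init and all services on the manager are finished.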
2025-09-17 15:21:44.973056 | 2025-09-17 15:21:44.973184 | TASK [Run manager part 0] 2025-09-17 15:21:45.881552 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-17 15:21:45.927019 | orchestrator | 2025-09-17 15:21:45.927067 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-17 15:21:45.927073 | orchestrator | 2025-09-17 15:21:45.927087 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-17 15:21:47.658638 | orchestrator | ok: [testbed-manager] 2025-09-17 15:21:47.658685 | orchestrator | 2025-09-17 15:21:47.658721 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-17 15:21:47.658730 | orchestrator | 2025-09-17 15:21:47.658739 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-17 15:21:49.488905 | orchestrator | ok: [testbed-manager] 2025-09-17 15:21:49.488956 | orchestrator | 2025-09-17 15:21:49.488964 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-17 15:21:50.119751 | orchestrator | ok: [testbed-manager] 2025-09-17 15:21:50.119796 | orchestrator | 2025-09-17 15:21:50.119803 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-17 15:21:50.167502 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:21:50.167551 | orchestrator | 2025-09-17 15:21:50.167561 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-17 15:21:50.200329 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:21:50.200374 | orchestrator | 2025-09-17 15:21:50.200381 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-17 15:21:50.235566 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:21:50.235640 | 
orchestrator | 2025-09-17 15:21:50.235653 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-17 15:21:50.272562 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:21:50.272615 | orchestrator | 2025-09-17 15:21:50.272623 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-17 15:21:50.307474 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:21:50.307556 | orchestrator | 2025-09-17 15:21:50.307575 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-17 15:21:50.346163 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:21:50.346208 | orchestrator | 2025-09-17 15:21:50.346220 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-17 15:21:50.382110 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:21:50.382164 | orchestrator | 2025-09-17 15:21:50.382172 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-17 15:21:51.097193 | orchestrator | changed: [testbed-manager] 2025-09-17 15:21:51.097265 | orchestrator | 2025-09-17 15:21:51.097273 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-17 15:24:09.490523 | orchestrator | changed: [testbed-manager] 2025-09-17 15:24:09.490597 | orchestrator | 2025-09-17 15:24:09.490615 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-17 15:25:24.299708 | orchestrator | changed: [testbed-manager] 2025-09-17 15:25:24.299812 | orchestrator | 2025-09-17 15:25:24.299830 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-17 15:25:43.654812 | orchestrator | changed: [testbed-manager] 2025-09-17 15:25:43.654903 | orchestrator | 2025-09-17 15:25:43.654923 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-09-17 15:25:52.192880 | orchestrator | changed: [testbed-manager] 2025-09-17 15:25:52.192972 | orchestrator | 2025-09-17 15:25:52.192989 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-17 15:25:52.239792 | orchestrator | ok: [testbed-manager] 2025-09-17 15:25:52.239959 | orchestrator | 2025-09-17 15:25:52.239977 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-17 15:25:52.992146 | orchestrator | ok: [testbed-manager] 2025-09-17 15:25:52.992192 | orchestrator | 2025-09-17 15:25:52.992200 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-17 15:25:53.668490 | orchestrator | changed: [testbed-manager] 2025-09-17 15:25:53.668580 | orchestrator | 2025-09-17 15:25:53.668598 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-17 15:25:59.995554 | orchestrator | changed: [testbed-manager] 2025-09-17 15:25:59.995645 | orchestrator | 2025-09-17 15:25:59.995683 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-17 15:26:05.859623 | orchestrator | changed: [testbed-manager] 2025-09-17 15:26:05.859717 | orchestrator | 2025-09-17 15:26:05.859736 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-17 15:26:08.478981 | orchestrator | changed: [testbed-manager] 2025-09-17 15:26:08.479633 | orchestrator | 2025-09-17 15:26:08.479653 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-17 15:26:10.200179 | orchestrator | changed: [testbed-manager] 2025-09-17 15:26:10.200303 | orchestrator | 2025-09-17 15:26:10.200321 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-17 
15:26:11.233637 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-17 15:26:11.233700 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-17 15:26:11.233713 | orchestrator | 2025-09-17 15:26:11.233725 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-17 15:26:11.271563 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-17 15:26:11.271588 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-17 15:26:11.271600 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-17 15:26:11.271612 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-17 15:26:14.338973 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-17 15:26:14.339031 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-17 15:26:14.339036 | orchestrator | 2025-09-17 15:26:14.339041 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-17 15:26:14.891221 | orchestrator | changed: [testbed-manager] 2025-09-17 15:26:14.891335 | orchestrator | 2025-09-17 15:26:14.891352 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-17 15:29:46.933322 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-17 15:29:46.933418 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-17 15:29:46.933434 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-17 15:29:46.933445 | orchestrator | 2025-09-17 15:29:46.933456 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-17 15:29:49.164349 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-09-17 15:29:49.164383 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-17 15:29:49.164388 | orchestrator | 2025-09-17 15:29:49.164394 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-17 15:29:49.164398 | orchestrator | 2025-09-17 15:29:49.164402 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-17 15:29:50.524932 | orchestrator | ok: [testbed-manager] 2025-09-17 15:29:50.524965 | orchestrator | 2025-09-17 15:29:50.524973 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-17 15:29:50.570706 | orchestrator | ok: [testbed-manager] 2025-09-17 15:29:50.570743 | orchestrator | 2025-09-17 15:29:50.570751 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-17 15:29:50.636386 | orchestrator | ok: [testbed-manager] 2025-09-17 15:29:50.636422 | orchestrator | 2025-09-17 15:29:50.636430 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-17 15:29:51.377362 | orchestrator | changed: [testbed-manager] 2025-09-17 15:29:51.377442 | orchestrator | 2025-09-17 15:29:51.377458 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-17 15:29:52.137150 | orchestrator | changed: [testbed-manager] 2025-09-17 15:29:52.137233 | orchestrator | 2025-09-17 15:29:52.137250 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-17 15:29:53.494500 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-17 15:29:53.494596 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-17 15:29:53.494613 | orchestrator | 2025-09-17 15:29:53.494641 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-09-17 15:29:54.841471 | orchestrator | changed: [testbed-manager] 2025-09-17 15:29:54.841633 | orchestrator | 2025-09-17 15:29:54.841647 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-17 15:29:56.536261 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-17 15:29:56.536368 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-17 15:29:56.536382 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-17 15:29:56.536394 | orchestrator | 2025-09-17 15:29:56.536407 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-17 15:29:56.592384 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:29:56.592433 | orchestrator | 2025-09-17 15:29:56.592440 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-17 15:29:57.146705 | orchestrator | changed: [testbed-manager] 2025-09-17 15:29:57.146905 | orchestrator | 2025-09-17 15:29:57.146940 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-17 15:29:57.213925 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:29:57.213972 | orchestrator | 2025-09-17 15:29:57.213978 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-17 15:29:57.999479 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-17 15:29:57.999561 | orchestrator | changed: [testbed-manager] 2025-09-17 15:29:57.999577 | orchestrator | 2025-09-17 15:29:57.999589 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-17 15:29:58.037214 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:29:58.037274 | orchestrator | 2025-09-17 15:29:58.037285 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-17 15:29:58.071129 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:29:58.071192 | orchestrator | 2025-09-17 15:29:58.071206 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-17 15:29:58.101578 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:29:58.101642 | orchestrator | 2025-09-17 15:29:58.101659 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-17 15:29:58.149680 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:29:58.149745 | orchestrator | 2025-09-17 15:29:58.149761 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-17 15:29:58.821018 | orchestrator | ok: [testbed-manager] 2025-09-17 15:29:58.821132 | orchestrator | 2025-09-17 15:29:58.821149 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-17 15:29:58.821162 | orchestrator | 2025-09-17 15:29:58.821173 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-17 15:30:00.194191 | orchestrator | ok: [testbed-manager] 2025-09-17 15:30:00.194264 | orchestrator | 2025-09-17 15:30:00.194280 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-17 15:30:01.137081 | orchestrator | changed: [testbed-manager] 2025-09-17 15:30:01.137200 | orchestrator | 2025-09-17 15:30:01.137216 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 15:30:01.137229 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-17 15:30:01.137240 | orchestrator | 2025-09-17 15:30:01.295442 | orchestrator | ok: Runtime: 0:08:15.946610 2025-09-17 15:30:01.306308 | 2025-09-17 15:30:01.306417 | TASK [Point 
out that the log in on the manager is now possible] 2025-09-17 15:30:01.344156 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-17 15:30:01.354358 | 2025-09-17 15:30:01.354473 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-17 15:30:01.388382 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-17 15:30:01.396504 | 2025-09-17 15:30:01.396617 | TASK [Run manager part 1 + 2] 2025-09-17 15:30:02.187816 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-17 15:30:02.238172 | orchestrator | 2025-09-17 15:30:02.238233 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-17 15:30:02.238245 | orchestrator | 2025-09-17 15:30:02.238266 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-17 15:30:05.088361 | orchestrator | ok: [testbed-manager] 2025-09-17 15:30:05.088513 | orchestrator | 2025-09-17 15:30:05.088568 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-17 15:30:05.124664 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:30:05.124721 | orchestrator | 2025-09-17 15:30:05.124732 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-17 15:30:05.164415 | orchestrator | ok: [testbed-manager] 2025-09-17 15:30:05.164471 | orchestrator | 2025-09-17 15:30:05.164481 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-17 15:30:05.196747 | orchestrator | ok: [testbed-manager] 2025-09-17 15:30:05.196796 | orchestrator | 2025-09-17 15:30:05.196804 | orchestrator | TASK [osism.commons.repository : Set repository_default fact
to default value] *** 2025-09-17 15:30:05.256328 | orchestrator | ok: [testbed-manager] 2025-09-17 15:30:05.256380 | orchestrator | 2025-09-17 15:30:05.256387 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-17 15:30:05.316799 | orchestrator | ok: [testbed-manager] 2025-09-17 15:30:05.316860 | orchestrator | 2025-09-17 15:30:05.316872 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-17 15:30:05.361112 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-17 15:30:05.361168 | orchestrator | 2025-09-17 15:30:05.361178 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-17 15:30:05.981106 | orchestrator | ok: [testbed-manager] 2025-09-17 15:30:05.981182 | orchestrator | 2025-09-17 15:30:05.981201 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-17 15:30:06.026113 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:30:06.026152 | orchestrator | 2025-09-17 15:30:06.026159 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-17 15:30:07.208517 | orchestrator | changed: [testbed-manager] 2025-09-17 15:30:07.208582 | orchestrator | 2025-09-17 15:30:07.208597 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-17 15:30:07.726521 | orchestrator | ok: [testbed-manager] 2025-09-17 15:30:07.726588 | orchestrator | 2025-09-17 15:30:07.726602 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-17 15:30:08.755169 | orchestrator | changed: [testbed-manager] 2025-09-17 15:30:08.755236 | orchestrator | 2025-09-17 15:30:08.755251 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-09-17 15:30:24.246180 | orchestrator | changed: [testbed-manager] 2025-09-17 15:30:24.246269 | orchestrator | 2025-09-17 15:30:24.246286 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-17 15:30:24.864611 | orchestrator | ok: [testbed-manager] 2025-09-17 15:30:24.864658 | orchestrator | 2025-09-17 15:30:24.864669 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-17 15:30:24.936374 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:30:24.936452 | orchestrator | 2025-09-17 15:30:24.936468 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-17 15:30:25.847470 | orchestrator | changed: [testbed-manager] 2025-09-17 15:30:25.847522 | orchestrator | 2025-09-17 15:30:25.847530 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-17 15:30:26.731247 | orchestrator | changed: [testbed-manager] 2025-09-17 15:30:26.731338 | orchestrator | 2025-09-17 15:30:26.731354 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-17 15:30:27.254327 | orchestrator | changed: [testbed-manager] 2025-09-17 15:30:27.254377 | orchestrator | 2025-09-17 15:30:27.254392 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-17 15:30:27.290697 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-17 15:30:27.290771 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-17 15:30:27.290785 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-17 15:30:27.290797 | orchestrator | deprecation_warnings=False in ansible.cfg. 
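The [DEPRECATION WARNING] above names its own remedy: setting `deprecation_warnings=False` in ansible.cfg. A minimal sketch of that configuration; the `/tmp` path is illustrative only, as the real file would live in the playbook directory or at `~/.ansible.cfg`.

```shell
# Write a minimal ansible.cfg that disables deprecation warnings,
# as the warning text above recommends.
cat > /tmp/ansible.cfg <<'EOF'
[defaults]
deprecation_warnings = False
EOF
```

Ansible resolves its config from `ANSIBLE_CONFIG`, then `./ansible.cfg`, then `~/.ansible.cfg`, then `/etc/ansible/ansible.cfg`, so the file must sit somewhere on that search path to take effect.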
2025-09-17 15:30:29.060389 | orchestrator | changed: [testbed-manager] 2025-09-17 15:30:29.060474 | orchestrator | 2025-09-17 15:30:29.060490 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-17 15:30:37.853452 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-17 15:30:37.853587 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-17 15:30:37.853604 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-17 15:30:37.853616 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-17 15:30:37.853632 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-17 15:30:37.853642 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-17 15:30:37.853652 | orchestrator | 2025-09-17 15:30:37.853663 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-17 15:30:38.874743 | orchestrator | changed: [testbed-manager] 2025-09-17 15:30:38.874886 | orchestrator | 2025-09-17 15:30:38.874907 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-17 15:30:38.914613 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:30:38.914697 | orchestrator | 2025-09-17 15:30:38.914713 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-17 15:30:41.890978 | orchestrator | changed: [testbed-manager] 2025-09-17 15:30:41.891068 | orchestrator | 2025-09-17 15:30:41.891085 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-17 15:30:41.935252 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:30:41.935343 | orchestrator | 2025-09-17 15:30:41.935359 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-17 15:32:20.231722 | orchestrator | changed: [testbed-manager] 2025-09-17 
15:32:20.231852 | orchestrator | 2025-09-17 15:32:20.231873 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-17 15:32:21.455759 | orchestrator | ok: [testbed-manager] 2025-09-17 15:32:21.455802 | orchestrator | 2025-09-17 15:32:21.455812 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 15:32:21.455820 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-17 15:32:21.455828 | orchestrator | 2025-09-17 15:32:22.013652 | orchestrator | ok: Runtime: 0:02:19.864102 2025-09-17 15:32:22.022605 | 2025-09-17 15:32:22.022725 | TASK [Reboot manager] 2025-09-17 15:32:23.556112 | orchestrator | ok: Runtime: 0:00:01.010288 2025-09-17 15:32:23.569003 | 2025-09-17 15:32:23.569169 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-17 15:32:37.894240 | orchestrator | ok 2025-09-17 15:32:37.906585 | 2025-09-17 15:32:37.906733 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-17 15:33:37.952258 | orchestrator | ok 2025-09-17 15:33:37.961420 | 2025-09-17 15:33:37.961544 | TASK [Deploy manager + bootstrap nodes] 2025-09-17 15:33:40.480313 | orchestrator | 2025-09-17 15:33:40.480558 | orchestrator | # DEPLOY MANAGER 2025-09-17 15:33:40.480584 | orchestrator | 2025-09-17 15:33:40.480599 | orchestrator | + set -e 2025-09-17 15:33:40.480613 | orchestrator | + echo 2025-09-17 15:33:40.480627 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-17 15:33:40.480644 | orchestrator | + echo 2025-09-17 15:33:40.480697 | orchestrator | + cat /opt/manager-vars.sh 2025-09-17 15:33:40.483719 | orchestrator | export NUMBER_OF_NODES=6 2025-09-17 15:33:40.483750 | orchestrator | 2025-09-17 15:33:40.483764 | orchestrator | export CEPH_VERSION=reef 2025-09-17 15:33:40.483776 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-17 15:33:40.483789 | orchestrator 
| export MANAGER_VERSION=9.2.0 2025-09-17 15:33:40.483811 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-17 15:33:40.483822 | orchestrator | 2025-09-17 15:33:40.483840 | orchestrator | export ARA=false 2025-09-17 15:33:40.483851 | orchestrator | export DEPLOY_MODE=manager 2025-09-17 15:33:40.483869 | orchestrator | export TEMPEST=false 2025-09-17 15:33:40.483880 | orchestrator | export IS_ZUUL=true 2025-09-17 15:33:40.483891 | orchestrator | 2025-09-17 15:33:40.483909 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.205 2025-09-17 15:33:40.483920 | orchestrator | export EXTERNAL_API=false 2025-09-17 15:33:40.483930 | orchestrator | 2025-09-17 15:33:40.483941 | orchestrator | export IMAGE_USER=ubuntu 2025-09-17 15:33:40.483955 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-17 15:33:40.483966 | orchestrator | 2025-09-17 15:33:40.483976 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-17 15:33:40.483993 | orchestrator | 2025-09-17 15:33:40.484004 | orchestrator | + echo 2025-09-17 15:33:40.484016 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-17 15:33:40.484547 | orchestrator | ++ export INTERACTIVE=false 2025-09-17 15:33:40.484563 | orchestrator | ++ INTERACTIVE=false 2025-09-17 15:33:40.484575 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-17 15:33:40.484587 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-17 15:33:40.484811 | orchestrator | + source /opt/manager-vars.sh 2025-09-17 15:33:40.484826 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-17 15:33:40.484838 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-17 15:33:40.484848 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-17 15:33:40.484859 | orchestrator | ++ CEPH_VERSION=reef 2025-09-17 15:33:40.484938 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-17 15:33:40.484954 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-17 15:33:40.484965 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-17 15:33:40.484975 | 
orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-17 15:33:40.484986 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-17 15:33:40.485007 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-17 15:33:40.485018 | orchestrator | ++ export ARA=false
2025-09-17 15:33:40.485029 | orchestrator | ++ ARA=false
2025-09-17 15:33:40.485040 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-17 15:33:40.485051 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-17 15:33:40.485061 | orchestrator | ++ export TEMPEST=false
2025-09-17 15:33:40.485072 | orchestrator | ++ TEMPEST=false
2025-09-17 15:33:40.485083 | orchestrator | ++ export IS_ZUUL=true
2025-09-17 15:33:40.485094 | orchestrator | ++ IS_ZUUL=true
2025-09-17 15:33:40.485108 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.205
2025-09-17 15:33:40.485120 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.205
2025-09-17 15:33:40.485131 | orchestrator | ++ export EXTERNAL_API=false
2025-09-17 15:33:40.485142 | orchestrator | ++ EXTERNAL_API=false
2025-09-17 15:33:40.485152 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-17 15:33:40.485163 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-17 15:33:40.485174 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-17 15:33:40.485184 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-17 15:33:40.485195 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-17 15:33:40.485206 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-17 15:33:40.485217 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-09-17 15:33:40.537933 | orchestrator | + docker version
2025-09-17 15:33:40.774992 | orchestrator | Client: Docker Engine - Community
2025-09-17 15:33:40.775082 | orchestrator | Version: 27.5.1
2025-09-17 15:33:40.775096 | orchestrator | API version: 1.47
2025-09-17 15:33:40.775107 | orchestrator | Go version: go1.22.11
2025-09-17 15:33:40.775118 | orchestrator | Git commit: 9f9e405
2025-09-17 15:33:40.775129 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-17 15:33:40.775140 | orchestrator | OS/Arch: linux/amd64
2025-09-17 15:33:40.775151 | orchestrator | Context: default
2025-09-17 15:33:40.775162 | orchestrator |
2025-09-17 15:33:40.775174 | orchestrator | Server: Docker Engine - Community
2025-09-17 15:33:40.775185 | orchestrator | Engine:
2025-09-17 15:33:40.775196 | orchestrator | Version: 27.5.1
2025-09-17 15:33:40.775207 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-09-17 15:33:40.775250 | orchestrator | Go version: go1.22.11
2025-09-17 15:33:40.775261 | orchestrator | Git commit: 4c9b3b0
2025-09-17 15:33:40.775272 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-17 15:33:40.775283 | orchestrator | OS/Arch: linux/amd64
2025-09-17 15:33:40.775293 | orchestrator | Experimental: false
2025-09-17 15:33:40.775304 | orchestrator | containerd:
2025-09-17 15:33:40.775315 | orchestrator | Version: 1.7.27
2025-09-17 15:33:40.775326 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-09-17 15:33:40.775390 | orchestrator | runc:
2025-09-17 15:33:40.775403 | orchestrator | Version: 1.2.5
2025-09-17 15:33:40.775414 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-09-17 15:33:40.775425 | orchestrator | docker-init:
2025-09-17 15:33:40.775436 | orchestrator | Version: 0.19.0
2025-09-17 15:33:40.775447 | orchestrator | GitCommit: de40ad0
2025-09-17 15:33:40.778833 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-09-17 15:33:40.786608 | orchestrator | + set -e
2025-09-17 15:33:40.786664 | orchestrator | + source /opt/manager-vars.sh
2025-09-17 15:33:40.786680 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-17 15:33:40.786693 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-17 15:33:40.786705 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-17 15:33:40.786716 | orchestrator | ++ CEPH_VERSION=reef
2025-09-17 15:33:40.786728 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-17 15:33:40.786739 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-17 15:33:40.786751 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-17 15:33:40.786762 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-17 15:33:40.786773 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-17 15:33:40.786783 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-17 15:33:40.786841 | orchestrator | ++ export ARA=false
2025-09-17 15:33:40.786855 | orchestrator | ++ ARA=false
2025-09-17 15:33:40.786865 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-17 15:33:40.786876 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-17 15:33:40.786887 | orchestrator | ++ export TEMPEST=false
2025-09-17 15:33:40.786897 | orchestrator | ++ TEMPEST=false
2025-09-17 15:33:40.786908 | orchestrator | ++ export IS_ZUUL=true
2025-09-17 15:33:40.786918 | orchestrator | ++ IS_ZUUL=true
2025-09-17 15:33:40.786929 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.205
2025-09-17 15:33:40.786940 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.205
2025-09-17 15:33:40.786951 | orchestrator | ++ export EXTERNAL_API=false
2025-09-17 15:33:40.786961 | orchestrator | ++ EXTERNAL_API=false
2025-09-17 15:33:40.786972 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-17 15:33:40.786983 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-17 15:33:40.786994 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-17 15:33:40.787005 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-17 15:33:40.787015 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-17 15:33:40.787026 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-17 15:33:40.787037 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-17 15:33:40.787047 | orchestrator | ++ export INTERACTIVE=false
2025-09-17 15:33:40.787058 | orchestrator | ++ INTERACTIVE=false
2025-09-17 15:33:40.787069 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-17 15:33:40.787084 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-17 15:33:40.787098 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-09-17 15:33:40.787110 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.2.0
2025-09-17 15:33:40.793898 | orchestrator | + set -e
2025-09-17 15:33:40.793939 | orchestrator | + VERSION=9.2.0
2025-09-17 15:33:40.793957 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.2.0/g' /opt/configuration/environments/manager/configuration.yml
2025-09-17 15:33:40.803690 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-09-17 15:33:40.803751 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-09-17 15:33:40.808387 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-09-17 15:33:40.812161 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-09-17 15:33:40.819947 | orchestrator | /opt/configuration ~
2025-09-17 15:33:40.819991 | orchestrator | + set -e
2025-09-17 15:33:40.820005 | orchestrator | + pushd /opt/configuration
2025-09-17 15:33:40.820016 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-17 15:33:40.822830 | orchestrator | + source /opt/venv/bin/activate
2025-09-17 15:33:40.823938 | orchestrator | ++ deactivate nondestructive
2025-09-17 15:33:40.823978 | orchestrator | ++ '[' -n '' ']'
2025-09-17 15:33:40.823994 | orchestrator | ++ '[' -n '' ']'
2025-09-17 15:33:40.824034 | orchestrator | ++ hash -r
2025-09-17 15:33:40.824046 | orchestrator | ++ '[' -n '' ']'
2025-09-17 15:33:40.824057 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-17 15:33:40.824067 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-17 15:33:40.824078 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-17 15:33:40.824228 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-17 15:33:40.824243 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-17 15:33:40.824254 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-17 15:33:40.824265 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-17 15:33:40.824281 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-17 15:33:40.824292 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-17 15:33:40.824359 | orchestrator | ++ export PATH
2025-09-17 15:33:40.824483 | orchestrator | ++ '[' -n '' ']'
2025-09-17 15:33:40.824502 | orchestrator | ++ '[' -z '' ']'
2025-09-17 15:33:40.824696 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-17 15:33:40.824711 | orchestrator | ++ PS1='(venv) '
2025-09-17 15:33:40.824722 | orchestrator | ++ export PS1
2025-09-17 15:33:40.824733 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-17 15:33:40.824744 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-17 15:33:40.824789 | orchestrator | ++ hash -r
2025-09-17 15:33:40.824802 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-09-17 15:33:41.851151 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-09-17 15:33:41.851828 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2025-09-17 15:33:41.853336 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-09-17 15:33:41.854595 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-09-17 15:33:41.856069 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-09-17 15:33:41.865600 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-09-17 15:33:41.867086 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-09-17 15:33:41.868257 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2025-09-17 15:33:41.869527 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-09-17 15:33:41.898532 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.3)
2025-09-17 15:33:41.899934 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-09-17 15:33:41.901714 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0)
2025-09-17 15:33:41.902894 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.8.3)
2025-09-17 15:33:41.906836 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-09-17 15:33:42.099062 | orchestrator | ++ which gilt
2025-09-17 15:33:42.101156 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-09-17 15:33:42.101194 | orchestrator | + /opt/venv/bin/gilt overlay
2025-09-17 15:33:42.312233 | orchestrator | osism.cfg-generics:
2025-09-17 15:33:42.475078 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-09-17 15:33:42.475190 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-09-17 15:33:42.475223 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-09-17 15:33:42.475387 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-09-17 15:33:43.093044 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-09-17 15:33:43.103547 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-09-17 15:33:43.398556 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-09-17 15:33:43.444439 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-17 15:33:43.444533 | orchestrator | + deactivate
2025-09-17 15:33:43.444545 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-17 15:33:43.444556 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-17 15:33:43.444565 | orchestrator | + export PATH
2025-09-17 15:33:43.444574 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-17 15:33:43.444583 | orchestrator | + '[' -n '' ']'
2025-09-17 15:33:43.444594 | orchestrator | + hash -r
2025-09-17 15:33:43.444603 | orchestrator | + '[' -n '' ']'
2025-09-17 15:33:43.444611 | orchestrator | + unset VIRTUAL_ENV
2025-09-17 15:33:43.444620 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-17 15:33:43.444629 | orchestrator | ~
2025-09-17 15:33:43.444638 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-17 15:33:43.444647 | orchestrator | + unset -f deactivate
2025-09-17 15:33:43.444655 | orchestrator | + popd
2025-09-17 15:33:43.446278 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]]
2025-09-17 15:33:43.446310 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-09-17 15:33:43.446895 | orchestrator | ++ semver 9.2.0 7.0.0
2025-09-17 15:33:43.507411 | orchestrator | + [[ 1 -ge 0 ]]
2025-09-17 15:33:43.507501 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-09-17 15:33:43.507517 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-09-17 15:33:43.603679 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-17 15:33:43.603769 | orchestrator | + source /opt/venv/bin/activate
2025-09-17 15:33:43.603781 | orchestrator | ++ deactivate nondestructive
2025-09-17 15:33:43.603793 | orchestrator | ++ '[' -n '' ']'
2025-09-17 15:33:43.603803 | orchestrator | ++ '[' -n '' ']'
2025-09-17 15:33:43.603814 | orchestrator | ++ hash -r
2025-09-17 15:33:43.603825 | orchestrator | ++ '[' -n '' ']'
2025-09-17 15:33:43.603835 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-17 15:33:43.603845 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-17 15:33:43.603856 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-17 15:33:43.603868 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-17 15:33:43.603878 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-17 15:33:43.603888 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-17 15:33:43.603899 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-17 15:33:43.603920 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-17 15:33:43.603940 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-17 15:33:43.603991 | orchestrator | ++ export PATH
2025-09-17 15:33:43.604025 | orchestrator | ++ '[' -n '' ']'
2025-09-17 15:33:43.604037 | orchestrator | ++ '[' -z '' ']'
2025-09-17 15:33:43.604048 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-17 15:33:43.604063 | orchestrator | ++ PS1='(venv) '
2025-09-17 15:33:43.604082 | orchestrator | ++ export PS1
2025-09-17 15:33:43.604097 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-17 15:33:43.604108 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-17 15:33:43.604118 | orchestrator | ++ hash -r
2025-09-17 15:33:43.604129 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-09-17 15:33:44.658496 | orchestrator |
2025-09-17 15:33:44.658598 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-09-17 15:33:44.658613 | orchestrator |
2025-09-17 15:33:44.658625 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-17 15:33:45.227834 | orchestrator | ok: [testbed-manager]
2025-09-17 15:33:45.227943 | orchestrator |
2025-09-17 15:33:45.227962 | orchestrator | TASK [Copy fact files] *********************************************************
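The `set-manager-version.sh` run traced above boils down to a version pin plus two deletions with `sed -i`. A minimal standalone sketch, using a local file with made-up contents in place of `/opt/configuration/environments/manager/configuration.yml`:

```shell
# Sketch of the version pinning seen in the trace; the configuration file
# content below is an illustrative stand-in, not the testbed's real file.
set -e
VERSION=9.2.0
CONFIG=configuration.yml
printf '%s\n' 'manager_version: latest' 'ceph_version: reef' 'openstack_version: 2024.2' > "$CONFIG"

# Pin the manager version in place.
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$CONFIG"

# For a pinned release (anything but "latest"), drop the explicit ceph and
# openstack versions so the release defaults apply.
if [ "$VERSION" != latest ]; then
    sed -i /ceph_version:/d "$CONFIG"
    sed -i /openstack_version:/d "$CONFIG"
fi
cat "$CONFIG"
```

After this runs, the file contains only the pinned `manager_version: 9.2.0` line, matching the edits visible in the trace.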
2025-09-17 15:33:46.167101 | orchestrator | changed: [testbed-manager]
2025-09-17 15:33:46.167198 | orchestrator |
2025-09-17 15:33:46.167214 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-09-17 15:33:46.167226 | orchestrator |
2025-09-17 15:33:46.167237 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-17 15:33:48.302701 | orchestrator | ok: [testbed-manager]
2025-09-17 15:33:48.302823 | orchestrator |
2025-09-17 15:33:48.302840 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-09-17 15:33:48.355006 | orchestrator | ok: [testbed-manager]
2025-09-17 15:33:48.355093 | orchestrator |
2025-09-17 15:33:48.355108 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-09-17 15:33:48.795709 | orchestrator | changed: [testbed-manager]
2025-09-17 15:33:48.795809 | orchestrator |
2025-09-17 15:33:48.795828 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-09-17 15:33:48.828718 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:33:48.828757 | orchestrator |
2025-09-17 15:33:48.828769 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-09-17 15:33:49.160865 | orchestrator | changed: [testbed-manager]
2025-09-17 15:33:49.160953 | orchestrator |
2025-09-17 15:33:49.160966 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-09-17 15:33:49.219646 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:33:49.219708 | orchestrator |
2025-09-17 15:33:49.219720 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-09-17 15:33:49.537324 | orchestrator | ok: [testbed-manager]
2025-09-17 15:33:49.537470 | orchestrator |
2025-09-17 15:33:49.537485 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-09-17 15:33:49.648478 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:33:49.648540 | orchestrator |
2025-09-17 15:33:49.648552 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-09-17 15:33:49.648564 | orchestrator |
2025-09-17 15:33:49.648575 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-17 15:33:51.332839 | orchestrator | ok: [testbed-manager]
2025-09-17 15:33:51.332944 | orchestrator |
2025-09-17 15:33:51.332961 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-09-17 15:33:51.428026 | orchestrator | included: osism.services.traefik for testbed-manager
2025-09-17 15:33:51.428108 | orchestrator |
2025-09-17 15:33:51.428122 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-09-17 15:33:51.480698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-09-17 15:33:51.480764 | orchestrator |
2025-09-17 15:33:51.480780 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-09-17 15:33:52.558269 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-09-17 15:33:52.558412 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-09-17 15:33:52.558436 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-09-17 15:33:52.558455 | orchestrator |
2025-09-17 15:33:52.558473 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-09-17 15:33:54.320465 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-09-17 15:33:54.320570 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
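Earlier in the trace, `semver 9.2.0 7.0.0` (symlinked from `contrib/semver2.sh`) returned 1, and `[[ 1 -ge 0 ]]` gated the `enable_osism_kubernetes: true` line. A rough stand-in for that comparison using GNU `sort -V` (not the script the job actually uses), with a hypothetical target file:

```shell
# Rough stand-in for the semver gate in the trace; the real job uses
# /opt/configuration/contrib/semver2.sh, and 'environments.yml' is a
# hypothetical target file for illustration.
semver_ge() {
    # succeed if $1 >= $2 in version ordering
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

out=environments.yml
: > "$out"
if semver_ge 9.2.0 7.0.0; then
    echo 'enable_osism_kubernetes: true' >> "$out"
fi
cat "$out"
```

The design point: gating a feature flag on the manager version keeps older testbed releases deployable from the same configuration repository.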
2025-09-17 15:33:54.320585 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-09-17 15:33:54.320596 | orchestrator |
2025-09-17 15:33:54.320607 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-09-17 15:33:54.910803 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-17 15:33:54.910901 | orchestrator | changed: [testbed-manager]
2025-09-17 15:33:54.910917 | orchestrator |
2025-09-17 15:33:54.910929 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-09-17 15:33:55.532787 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-17 15:33:55.532883 | orchestrator | changed: [testbed-manager]
2025-09-17 15:33:55.532899 | orchestrator |
2025-09-17 15:33:55.532911 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-09-17 15:33:55.570861 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:33:55.570895 | orchestrator |
2025-09-17 15:33:55.570906 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-09-17 15:33:55.915507 | orchestrator | ok: [testbed-manager]
2025-09-17 15:33:55.915612 | orchestrator |
2025-09-17 15:33:55.915634 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-09-17 15:33:55.988628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-09-17 15:33:55.988717 | orchestrator |
2025-09-17 15:33:55.988731 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-09-17 15:33:57.032198 | orchestrator | changed: [testbed-manager]
2025-09-17 15:33:57.032300 | orchestrator |
2025-09-17 15:33:57.032317 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-09-17 15:33:57.807138 | orchestrator | changed: [testbed-manager]
2025-09-17 15:33:57.807236 | orchestrator |
2025-09-17 15:33:57.807251 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-09-17 15:34:08.992257 | orchestrator | changed: [testbed-manager]
2025-09-17 15:34:08.992418 | orchestrator |
2025-09-17 15:34:08.992459 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-09-17 15:34:09.047771 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:34:09.047872 | orchestrator |
2025-09-17 15:34:09.047889 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-09-17 15:34:09.047902 | orchestrator |
2025-09-17 15:34:09.047914 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-17 15:34:10.820280 | orchestrator | ok: [testbed-manager]
2025-09-17 15:34:10.820432 | orchestrator |
2025-09-17 15:34:10.820450 | orchestrator | TASK [Apply manager role] ******************************************************
2025-09-17 15:34:10.932881 | orchestrator | included: osism.services.manager for testbed-manager
2025-09-17 15:34:10.932931 | orchestrator |
2025-09-17 15:34:10.932943 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-09-17 15:34:10.991320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-09-17 15:34:10.991395 | orchestrator |
2025-09-17 15:34:10.991409 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-09-17 15:34:13.572270 | orchestrator | ok: [testbed-manager]
2025-09-17 15:34:13.572489 | orchestrator |
2025-09-17 15:34:13.572513 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-09-17 15:34:13.630289 | orchestrator | ok: [testbed-manager]
2025-09-17 15:34:13.630325 | orchestrator |
2025-09-17 15:34:13.630337 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-09-17 15:34:13.762704 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-09-17 15:34:13.762777 | orchestrator |
2025-09-17 15:34:13.762791 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-09-17 15:34:16.715090 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-09-17 15:34:16.715203 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-09-17 15:34:16.715220 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-09-17 15:34:16.715233 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-09-17 15:34:16.715244 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-09-17 15:34:16.715255 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-09-17 15:34:16.715266 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-09-17 15:34:16.715277 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-09-17 15:34:16.715289 | orchestrator |
2025-09-17 15:34:16.715304 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-09-17 15:34:17.366479 | orchestrator | changed: [testbed-manager]
2025-09-17 15:34:17.366582 | orchestrator |
2025-09-17 15:34:17.366597 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-09-17 15:34:17.995076 | orchestrator | changed: [testbed-manager]
2025-09-17 15:34:17.995167 | orchestrator |
2025-09-17 15:34:17.995180 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-09-17 15:34:18.064433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-09-17 15:34:18.064491 | orchestrator |
2025-09-17 15:34:18.064505 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-09-17 15:34:19.261989 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-09-17 15:34:19.262128 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-09-17 15:34:19.262143 | orchestrator |
2025-09-17 15:34:19.262155 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-09-17 15:34:19.900277 | orchestrator | changed: [testbed-manager]
2025-09-17 15:34:19.900425 | orchestrator |
2025-09-17 15:34:19.900445 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-09-17 15:34:19.956553 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:34:19.956613 | orchestrator |
2025-09-17 15:34:19.956628 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-09-17 15:34:20.037838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-09-17 15:34:20.037915 | orchestrator |
2025-09-17 15:34:20.037928 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-09-17 15:34:20.653714 | orchestrator | changed: [testbed-manager]
2025-09-17 15:34:20.653813 | orchestrator |
2025-09-17 15:34:20.653829 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-09-17 15:34:20.715938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-09-17 15:34:20.715985 | orchestrator |
2025-09-17 15:34:20.716001 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-09-17 15:34:22.096251 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-17 15:34:22.096348 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-17 15:34:22.096363 | orchestrator | changed: [testbed-manager]
2025-09-17 15:34:22.096424 | orchestrator |
2025-09-17 15:34:22.096437 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-09-17 15:34:22.716637 | orchestrator | changed: [testbed-manager]
2025-09-17 15:34:22.716706 | orchestrator |
2025-09-17 15:34:22.716721 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-09-17 15:34:22.766134 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:34:22.766163 | orchestrator |
2025-09-17 15:34:22.766178 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-09-17 15:34:22.854209 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-09-17 15:34:22.854239 | orchestrator |
2025-09-17 15:34:22.854252 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-09-17 15:34:23.395567 | orchestrator | changed: [testbed-manager]
2025-09-17 15:34:23.395648 | orchestrator |
2025-09-17 15:34:23.395657 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-09-17 15:34:23.804187 | orchestrator | changed: [testbed-manager]
2025-09-17 15:34:23.804278 | orchestrator |
2025-09-17 15:34:23.804292 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-09-17 15:34:25.050467 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-09-17 15:34:25.050534 | orchestrator | changed: [testbed-manager] => (item=openstack)
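The two `fs.inotify` tasks above raise kernel limits before the celery workers are configured (file watchers otherwise exhaust the defaults quickly). On a host this is typically persisted through a `sysctl.d` drop-in; a sketch with illustrative values, since the role's actual values are not visible in the log:

```shell
# Illustrative sysctl.d drop-in for the inotify limits the role sets;
# the concrete values here are assumptions, not taken from the role.
SYSCTL_DIR=./sysctl.d        # on a real host: /etc/sysctl.d
mkdir -p "$SYSCTL_DIR"
printf '%s\n' \
    'fs.inotify.max_user_watches=1048576' \
    'fs.inotify.max_user_instances=1024' \
    > "$SYSCTL_DIR/99-inotify.conf"
# Apply with: sysctl --system   (requires root; not run here)
cat "$SYSCTL_DIR/99-inotify.conf"
```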
2025-09-17 15:34:25.050540 | orchestrator |
2025-09-17 15:34:25.050546 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-09-17 15:34:25.690274 | orchestrator | changed: [testbed-manager]
2025-09-17 15:34:25.690362 | orchestrator |
2025-09-17 15:34:25.690436 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-09-17 15:34:26.093600 | orchestrator | ok: [testbed-manager]
2025-09-17 15:34:26.093681 | orchestrator |
2025-09-17 15:34:26.093694 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-09-17 15:34:26.467877 | orchestrator | changed: [testbed-manager]
2025-09-17 15:34:26.467969 | orchestrator |
2025-09-17 15:34:26.467983 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-09-17 15:34:26.517464 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:34:26.517549 | orchestrator |
2025-09-17 15:34:26.517565 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-09-17 15:34:26.588200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-09-17 15:34:26.588326 | orchestrator |
2025-09-17 15:34:26.588342 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-09-17 15:34:26.634966 | orchestrator | ok: [testbed-manager]
2025-09-17 15:34:26.635002 | orchestrator |
2025-09-17 15:34:26.635014 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-09-17 15:34:28.646726 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-09-17 15:34:28.646837 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-09-17 15:34:28.646851 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
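The wrapper scripts installed here (`osism`, and below `cilium`, `hubble`, `flux`) let CLIs that ship inside containers be invoked like local binaries. A generic sketch of that pattern; the container name and command are hypothetical examples, not the role's actual wrapper:

```shell
# Generic sketch of a container-CLI wrapper like the ones the role installs.
# 'osism-manager' is a hypothetical container name used for illustration.
cat > osism-wrapper <<'EOF'
#!/usr/bin/env bash
# Forward all arguments to the CLI inside the container.
exec docker exec osism-manager osism "$@"
EOF
chmod +x osism-wrapper
```

The benefit of the pattern: operators get a stable `osism ...` command on the host while the implementation stays versioned with the container image.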
2025-09-17 15:34:28.646865 | orchestrator | 2025-09-17 15:34:28.646878 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-17 15:34:29.347819 | orchestrator | changed: [testbed-manager] 2025-09-17 15:34:29.347924 | orchestrator | 2025-09-17 15:34:29.347940 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-17 15:34:30.076178 | orchestrator | changed: [testbed-manager] 2025-09-17 15:34:30.076274 | orchestrator | 2025-09-17 15:34:30.076290 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-17 15:34:30.779655 | orchestrator | changed: [testbed-manager] 2025-09-17 15:34:30.779746 | orchestrator | 2025-09-17 15:34:30.779758 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-17 15:34:30.861647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-17 15:34:30.861712 | orchestrator | 2025-09-17 15:34:30.861723 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-17 15:34:30.900261 | orchestrator | ok: [testbed-manager] 2025-09-17 15:34:30.900310 | orchestrator | 2025-09-17 15:34:30.900321 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-09-17 15:34:31.600529 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-17 15:34:31.600627 | orchestrator | 2025-09-17 15:34:31.600640 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-17 15:34:31.681907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-17 15:34:31.681970 | orchestrator | 2025-09-17 15:34:31.681987 | orchestrator | 
TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-17 15:34:32.394499 | orchestrator | changed: [testbed-manager] 2025-09-17 15:34:32.394592 | orchestrator | 2025-09-17 15:34:32.394604 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-17 15:34:32.977607 | orchestrator | ok: [testbed-manager] 2025-09-17 15:34:32.977720 | orchestrator | 2025-09-17 15:34:32.977737 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-17 15:34:33.024596 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:34:33.024643 | orchestrator | 2025-09-17 15:34:33.024659 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-17 15:34:33.082110 | orchestrator | ok: [testbed-manager] 2025-09-17 15:34:33.082172 | orchestrator | 2025-09-17 15:34:33.082186 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-17 15:34:33.914244 | orchestrator | changed: [testbed-manager] 2025-09-17 15:34:33.915122 | orchestrator | 2025-09-17 15:34:33.915158 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-17 15:35:38.798864 | orchestrator | changed: [testbed-manager] 2025-09-17 15:35:38.798981 | orchestrator | 2025-09-17 15:35:38.798998 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-17 15:35:39.776297 | orchestrator | ok: [testbed-manager] 2025-09-17 15:35:39.776482 | orchestrator | 2025-09-17 15:35:39.776514 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-17 15:35:39.833889 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:35:39.833969 | orchestrator | 2025-09-17 15:35:39.833979 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2025-09-17 15:35:42.467100 | orchestrator | changed: [testbed-manager]
2025-09-17 15:35:42.467207 | orchestrator |
2025-09-17 15:35:42.467224 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-09-17 15:35:42.557755 | orchestrator | ok: [testbed-manager]
2025-09-17 15:35:42.557844 | orchestrator |
2025-09-17 15:35:42.557858 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-17 15:35:42.557871 | orchestrator |
2025-09-17 15:35:42.557882 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-09-17 15:35:42.606291 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:35:42.606392 | orchestrator |
2025-09-17 15:35:42.606446 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-09-17 15:36:42.656987 | orchestrator | Pausing for 60 seconds
2025-09-17 15:36:42.657154 | orchestrator | changed: [testbed-manager]
2025-09-17 15:36:42.657183 | orchestrator |
2025-09-17 15:36:42.657205 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-09-17 15:36:46.736118 | orchestrator | changed: [testbed-manager]
2025-09-17 15:36:46.736253 | orchestrator |
2025-09-17 15:36:46.736268 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-09-17 15:37:28.260488 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-17 15:37:28.260623 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-09-17 15:37:28.260640 | orchestrator | changed: [testbed-manager]
2025-09-17 15:37:28.260654 | orchestrator |
2025-09-17 15:37:28.260666 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-17 15:37:37.275692 | orchestrator | changed: [testbed-manager]
2025-09-17 15:37:37.275843 | orchestrator |
2025-09-17 15:37:37.275862 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-17 15:37:37.352656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-17 15:37:37.352763 | orchestrator |
2025-09-17 15:37:37.352779 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-17 15:37:37.352791 | orchestrator |
2025-09-17 15:37:37.352803 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-17 15:37:37.399382 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:37:37.399420 | orchestrator |
2025-09-17 15:37:37.399433 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 15:37:37.399493 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-09-17 15:37:37.399513 | orchestrator |
2025-09-17 15:37:37.492294 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-17 15:37:37.492739 | orchestrator | + deactivate
2025-09-17 15:37:37.492764 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-17 15:37:37.492778 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-17 15:37:37.492790 | orchestrator | + export PATH
2025-09-17 15:37:37.492802 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-17 15:37:37.492813 | orchestrator | + '[' -n '' ']'
2025-09-17 15:37:37.492824 | orchestrator | + hash -r
2025-09-17 15:37:37.492835 | orchestrator | + '[' -n '' ']'
2025-09-17 15:37:37.492846 | orchestrator | + unset VIRTUAL_ENV
2025-09-17 15:37:37.492856 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-17 15:37:37.492867 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-17 15:37:37.492878 | orchestrator | + unset -f deactivate
2025-09-17 15:37:37.492889 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-17 15:37:37.499724 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-17 15:37:37.499756 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-17 15:37:37.499767 | orchestrator | + local max_attempts=60
2025-09-17 15:37:37.499778 | orchestrator | + local name=ceph-ansible
2025-09-17 15:37:37.499790 | orchestrator | + local attempt_num=1
2025-09-17 15:37:37.500627 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-17 15:37:37.540407 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-17 15:37:37.540470 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-17 15:37:37.540484 | orchestrator | + local max_attempts=60
2025-09-17 15:37:37.540496 | orchestrator | + local name=kolla-ansible
2025-09-17 15:37:37.540545 | orchestrator | + local attempt_num=1
2025-09-17 15:37:37.541319 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-17 15:37:37.575645 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-17 15:37:37.575679 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-17 15:37:37.575690 | orchestrator | + local max_attempts=60
2025-09-17 15:37:37.575702 | orchestrator | + local name=osism-ansible
2025-09-17 15:37:37.575713 | orchestrator | + local attempt_num=1
2025-09-17 15:37:37.575724 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-17 15:37:37.606147 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-17 15:37:37.606171 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-17 15:37:37.606182 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-17 15:37:38.323687 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-17 15:37:38.549413 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-17 15:37:38.549556 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-09-17 15:37:38.549571 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-09-17 15:37:38.549583 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-09-17 15:37:38.549597 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-09-17 15:37:38.549608 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-09-17 15:37:38.549619 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-09-17 15:37:38.549630 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2025-09-17 15:37:38.549641 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-09-17 15:37:38.549651 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-09-17 15:37:38.549662 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-09-17 15:37:38.549673 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-09-17 15:37:38.549683 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-09-17 15:37:38.549694 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2025-09-17 15:37:38.549705 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-09-17 15:37:38.549750 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-09-17 15:37:38.556626 | orchestrator | ++ semver 9.2.0 7.0.0
2025-09-17 15:37:38.584710 | orchestrator | + [[ 1 -ge 0 ]]
2025-09-17 15:37:38.584762 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-09-17 15:37:38.586358 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-09-17 15:37:50.671582 | orchestrator | 2025-09-17 15:37:50 | INFO  | Task 5d16491b-c79d-4b53-b4fa-d3d1ce78e92e (resolvconf) was prepared for execution.
2025-09-17 15:37:50.671731 | orchestrator | 2025-09-17 15:37:50 | INFO  | It takes a moment until task 5d16491b-c79d-4b53-b4fa-d3d1ce78e92e (resolvconf) has been started and output is visible here.
2025-09-17 15:38:04.820011 | orchestrator |
2025-09-17 15:38:04.820130 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-09-17 15:38:04.820146 | orchestrator |
2025-09-17 15:38:04.820158 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-17 15:38:04.820170 | orchestrator | Wednesday 17 September 2025 15:37:54 +0000 (0:00:00.109) 0:00:00.109 ***
2025-09-17 15:38:04.820181 | orchestrator | ok: [testbed-manager]
2025-09-17 15:38:04.820193 | orchestrator |
2025-09-17 15:38:04.820204 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-17 15:38:04.820216 | orchestrator | Wednesday 17 September 2025 15:37:59 +0000 (0:00:05.322) 0:00:05.431 ***
2025-09-17 15:38:04.820227 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:38:04.820238 | orchestrator |
2025-09-17 15:38:04.820249 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-17 15:38:04.820260 | orchestrator | Wednesday 17 September 2025 15:37:59 +0000 (0:00:00.049) 0:00:05.481 ***
2025-09-17 15:38:04.820271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-09-17 15:38:04.820283 | orchestrator |
2025-09-17 15:38:04.820294 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-17 15:38:04.820305 | orchestrator | Wednesday 17 September 2025 15:37:59 +0000 (0:00:00.090) 0:00:05.572 ***
2025-09-17 15:38:04.820316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-09-17 15:38:04.820327 | orchestrator |
2025-09-17 15:38:04.820338 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-17 15:38:04.820349 | orchestrator | Wednesday 17 September 2025 15:37:59 +0000 (0:00:00.073) 0:00:05.645 ***
2025-09-17 15:38:04.820360 | orchestrator | ok: [testbed-manager]
2025-09-17 15:38:04.820370 | orchestrator |
2025-09-17 15:38:04.820381 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-17 15:38:04.820392 | orchestrator | Wednesday 17 September 2025 15:38:00 +0000 (0:00:00.950) 0:00:06.596 ***
2025-09-17 15:38:04.820403 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:38:04.820414 | orchestrator |
2025-09-17 15:38:04.820425 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-17 15:38:04.820436 | orchestrator | Wednesday 17 September 2025 15:38:00 +0000 (0:00:00.054) 0:00:06.650 ***
2025-09-17 15:38:04.820447 | orchestrator | ok: [testbed-manager]
2025-09-17 15:38:04.820503 | orchestrator |
2025-09-17 15:38:04.820515 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-17 15:38:04.820527 | orchestrator | Wednesday 17 September 2025 15:38:01 +0000 (0:00:00.421) 0:00:07.072 ***
2025-09-17 15:38:04.820538 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:38:04.820549 | orchestrator |
2025-09-17 15:38:04.820560 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-09-17 15:38:04.820595 | orchestrator | Wednesday 17 September 2025 15:38:01 +0000 (0:00:00.073) 0:00:07.145 ***
2025-09-17 15:38:04.820606 | orchestrator | changed: [testbed-manager]
2025-09-17 15:38:04.820617 | orchestrator |
2025-09-17 15:38:04.820628 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-17 15:38:04.820638 | orchestrator | Wednesday 17 September 2025 15:38:01 +0000 (0:00:00.447) 0:00:07.593 ***
2025-09-17 15:38:04.820649 | orchestrator | changed: [testbed-manager]
2025-09-17 15:38:04.820660 | orchestrator |
2025-09-17 15:38:04.820670 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-17 15:38:04.820681 | orchestrator | Wednesday 17 September 2025 15:38:02 +0000 (0:00:00.938) 0:00:08.532 ***
2025-09-17 15:38:04.820692 | orchestrator | ok: [testbed-manager]
2025-09-17 15:38:04.820703 | orchestrator |
2025-09-17 15:38:04.820714 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-17 15:38:04.820735 | orchestrator | Wednesday 17 September 2025 15:38:03 +0000 (0:00:00.830) 0:00:09.363 ***
2025-09-17 15:38:04.820747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-09-17 15:38:04.820758 | orchestrator |
2025-09-17 15:38:04.820769 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-17 15:38:04.820779 | orchestrator | Wednesday 17 September 2025 15:38:03 +0000 (0:00:00.077) 0:00:09.441 ***
2025-09-17 15:38:04.820790 | orchestrator | changed: [testbed-manager]
2025-09-17 15:38:04.820800 | orchestrator |
2025-09-17 15:38:04.820811 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 15:38:04.820823 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-17 15:38:04.820833 | orchestrator |
2025-09-17 15:38:04.820844 | orchestrator |
2025-09-17 15:38:04.820855 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 15:38:04.820865 | orchestrator | Wednesday 17 September 2025 15:38:04 +0000 (0:00:01.026) 0:00:10.467 ***
2025-09-17 15:38:04.820876 | orchestrator | ===============================================================================
2025-09-17 15:38:04.820886 | orchestrator | Gathering Facts --------------------------------------------------------- 5.32s
2025-09-17 15:38:04.820897 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.03s
2025-09-17 15:38:04.820907 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.95s
2025-09-17 15:38:04.820918 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.94s
2025-09-17 15:38:04.820929 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.83s
2025-09-17 15:38:04.820940 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.45s
2025-09-17 15:38:04.820967 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.42s
2025-09-17 15:38:04.820979 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-09-17 15:38:04.820990 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-09-17 15:38:04.821000 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-09-17 15:38:04.821011 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s
2025-09-17 15:38:04.821022 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s
2025-09-17 15:38:04.821033 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s
2025-09-17 15:38:04.975666 | orchestrator | + osism apply sshconfig
2025-09-17 15:38:16.623920 | orchestrator | 2025-09-17 15:38:16 | INFO  | Task 7ee53dbc-4b77-4d23-a1c6-8e052cf1b0a2 (sshconfig) was prepared for execution.
2025-09-17 15:38:16.624021 | orchestrator | 2025-09-17 15:38:16 | INFO  | It takes a moment until task 7ee53dbc-4b77-4d23-a1c6-8e052cf1b0a2 (sshconfig) has been started and output is visible here.
2025-09-17 15:38:27.575549 | orchestrator |
2025-09-17 15:38:27.575719 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-09-17 15:38:27.575738 | orchestrator |
2025-09-17 15:38:27.575751 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-09-17 15:38:27.575763 | orchestrator | Wednesday 17 September 2025 15:38:20 +0000 (0:00:00.184) 0:00:00.185 ***
2025-09-17 15:38:27.575775 | orchestrator | ok: [testbed-manager]
2025-09-17 15:38:27.575788 | orchestrator |
2025-09-17 15:38:27.575854 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-09-17 15:38:27.575868 | orchestrator | Wednesday 17 September 2025 15:38:20 +0000 (0:00:00.552) 0:00:00.737 ***
2025-09-17 15:38:27.575879 | orchestrator | changed: [testbed-manager]
2025-09-17 15:38:27.575891 | orchestrator |
2025-09-17 15:38:27.575902 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-09-17 15:38:27.575913 | orchestrator | Wednesday 17 September 2025 15:38:21 +0000 (0:00:00.483) 0:00:01.221 ***
2025-09-17 15:38:27.575924 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-09-17 15:38:27.575936 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-09-17 15:38:27.575948 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-09-17 15:38:27.575959 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-09-17 15:38:27.575969 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-09-17 15:38:27.575980 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-09-17 15:38:27.575991 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-09-17 15:38:27.576002 | orchestrator |
2025-09-17 15:38:27.576013 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-09-17 15:38:27.576024 | orchestrator | Wednesday 17 September 2025 15:38:26 +0000 (0:00:05.448) 0:00:06.669 ***
2025-09-17 15:38:27.576058 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:38:27.576072 | orchestrator |
2025-09-17 15:38:27.576085 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-09-17 15:38:27.576098 | orchestrator | Wednesday 17 September 2025 15:38:26 +0000 (0:00:00.067) 0:00:06.736 ***
2025-09-17 15:38:27.576110 | orchestrator | changed: [testbed-manager]
2025-09-17 15:38:27.576122 | orchestrator |
2025-09-17 15:38:27.576135 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 15:38:27.576149 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-17 15:38:27.576162 | orchestrator |
2025-09-17 15:38:27.576175 | orchestrator |
2025-09-17 15:38:27.576188 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 15:38:27.576201 | orchestrator | Wednesday 17 September 2025 15:38:27 +0000 (0:00:00.565) 0:00:07.302 ***
2025-09-17 15:38:27.576213 | orchestrator | ===============================================================================
2025-09-17 15:38:27.576225 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.45s
2025-09-17 15:38:27.576237 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s
2025-09-17 15:38:27.576249 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s
2025-09-17 15:38:27.576261 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.48s
2025-09-17 15:38:27.576273 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-09-17 15:38:27.805105 | orchestrator | + osism apply known-hosts
2025-09-17 15:38:39.596088 | orchestrator | 2025-09-17 15:38:39 | INFO  | Task 6516c446-3323-458a-8acb-134479b281b5 (known-hosts) was prepared for execution.
2025-09-17 15:38:39.596222 | orchestrator | 2025-09-17 15:38:39 | INFO  | It takes a moment until task 6516c446-3323-458a-8acb-134479b281b5 (known-hosts) has been started and output is visible here.
2025-09-17 15:38:55.996028 | orchestrator |
2025-09-17 15:38:55.996114 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-09-17 15:38:55.996126 | orchestrator |
2025-09-17 15:38:55.996135 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-09-17 15:38:55.996145 | orchestrator | Wednesday 17 September 2025 15:38:43 +0000 (0:00:00.122) 0:00:00.122 ***
2025-09-17 15:38:55.996154 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-17 15:38:55.996163 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-17 15:38:55.996172 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-17 15:38:55.996181 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-17 15:38:55.996190 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-17 15:38:55.996198 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-17 15:38:55.996207 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-17 15:38:55.996216 | orchestrator |
2025-09-17 15:38:55.996225 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-09-17 15:38:55.996234 | orchestrator | Wednesday 17 September 2025 15:38:48 +0000 (0:00:05.522) 0:00:05.645 ***
2025-09-17 15:38:55.996243 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-17 15:38:55.996253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-17 15:38:55.996262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-17 15:38:55.996271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-17 15:38:55.996280 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-17 15:38:55.996288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-17 15:38:55.996297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-17 15:38:55.996306 | orchestrator |
2025-09-17 15:38:55.996315 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-17 15:38:55.996324 | orchestrator | Wednesday 17 September 2025 15:38:48 +0000 (0:00:00.157) 0:00:05.802 ***
2025-09-17 15:38:55.996333 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFU818mW5IeIcF2Ee02G6GczbQgsZPJllHa2SQ3JlUnh)
2025-09-17 15:38:55.996393 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnIGR9RTmLpsO7SVcOuOg7zviLQwqSJqwebBI4zc9mjvEkJRg2qLZ192u4vUaUmlTO5cS/eURoZXfw8XMo0d/RHTGacfzUBQYck69ZXNkwuAAcmJC1teAnrdME29mpaSLyervLnUhiMi3btmMBv2totlXts1h2KHqglsvbHAkUTJhd9cjNbwUm0RkqAzROXljGjYxLoqkgmAjUzTEUGaQhrCSKA7WaOKrymTS2ZjLTQl+XXVOWLGp1ZmH/P+A0Lwm/x3PLuMdOiuAWa2EqgEBF7VI0+Z6x7fMAuI4djp8gwfh0sgxSxefOTnVqJirifk01WepXu2wjKirhglLg1rVnWzFn7bHbX+TXxfxAdxytkzYAN/NitvAp8T6xAdCIgI56mMBa2KLWzJI9k5dlzV94Kb2Vx1CCqX7Bv7VC4OKmidoXAmTEWGjB2WBFVRFsOLkC9wVtxRCDPgMsCXrG3D6WIPsHyoTU081o0/viVY2+FkhAsUAxcviCHBLx7rQCWac=)
2025-09-17 15:38:55.996407 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIQtF8qWpBGSVlAhEkfaEq1zYXWqDEdtDR+FQjoCRof/r3Os1ScCxW2AJ5k8DyttIa1bOZl62xsH6EK4fAX/LX0=)
2025-09-17 15:38:55.996439 | orchestrator |
2025-09-17 15:38:55.996449 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-17 15:38:55.996458 | orchestrator | Wednesday 17 September 2025 15:38:50 +0000 (0:00:02.139) 0:00:07.942 ***
2025-09-17 15:38:55.996518 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvpzt5hj73yj09AwjmQG4UdObpJUnsg+AUL1ceJSoWpCVh78j5y6mLFTHvOsUl4UC0JRqIYNbYPA4WDnDlnnMceTetIMCHRpfT0uWmF3f47c/+NCegQXUWfe8ZFBwcC0D4TfFbLHZeozWQLsgb/ZQeESiywImYDqSIKu9NCaWEv7JCAeoOon1DgqXMtwRc2AOciGWoLnQ5KqPasnBiwnvYbk7VQtNTaR7OUVrtt2W83wQXyD/jX8QQMXQqYyr/yeSBoioBVyOurShFPYMnAoMA3lYU1N7vxVZtnLI2Bq8epffzGB+Du2PBI9zAjQ5l7pDsTErB4GDM7W2ieHmTjIlWvdaK64cWwaD0Zl1cJS9lc3Zm7EuZ045YKfyj8OYD9+W6gBsggoV6z7W01sCL90vV1NcquAcl3g9BV/LYPRsFDXlutkBnvOtkrQ8mGTOpHGpIVjjQjNw+MLQdZVCwHVe+asoczHmjLfn0yyL80DKR8avnalyhZVvPsNdLOXulGxs=)
2025-09-17 15:38:55.996530 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG6bh95lytf77q0ku1dsrf7unqb/d2QtjT626CjpDgGjBhfjPDDAN0MAePwtvRSw8UaHdhRaggL7v49wrfhUL8I=)
2025-09-17 15:38:55.996539 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH724m6GLb4uc/jH4/WLiTGoICLnk1te+4EE8HehTYjf)
2025-09-17 15:38:55.996547 | orchestrator |
2025-09-17 15:38:55.996556 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-17 15:38:55.996565 | orchestrator | Wednesday 17 September 2025 15:38:51 +0000 (0:00:00.993) 0:00:08.936 ***
2025-09-17 15:38:55.996574 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKSKcj0aHrOO6pAouByr3LPmpGBLBtu7JdLF9rwG48ZC+5Cx3EI/wKnPMgUV/nX0/g+pPDE+5vGsZfVfBti7CF8OTYw5OYSJU4/7ShUFSMSRA+qWiHQv9ZP01kE4mhCprnlZTVescvhIZklP4FXTOrRcNeJJzqNjNK0BHTgR9GJNDEFBe2H4TiOKJ+GVWs6MhhBzkiNiwkQMpHk9Xl1YJooE+T6kxrW4A19Ur6IMiU2OTucuAn6p0ZTpoqaO05EAvOr+fDhcHBqQzhdX6A3Ak25R5Wj/DCH5SGg3khvYbNFETfD14ZblKnNiTXPfjpc9dqDs08IqSPqnOZS9YktOJNRTzvpBNUCX74dKc76youBFxHFuR6nq+GiCAZDNmIm1t6pbhHD+ks50KY/KvwnHR+kzUb/7wJT0g5cJ1YFNzVR1XEdy9st99Erx6xWTV3IS5lzNrSYGJwkYf53UTuc5WbohNk3PVPwuG/a+nkTPacjG1NzZ28X1tSO5IkpxqDVvk=)
2025-09-17 15:38:55.996584 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDk9NynPFJJtVWGbtNCyb4PtLXfl9gz5pftlk356Z5F4V2jaAdUq50iR5eXcXOAEB5WUeC1t2kNHtarkxOIwBG8=)
2025-09-17 15:38:55.996595 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFBDHEHR30EnNU3NwyT/nl+3y83IcPZLVkKtfw+qblAl)
2025-09-17 15:38:55.996605 | orchestrator |
2025-09-17 15:38:55.996615 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-17 15:38:55.996625 | orchestrator | Wednesday 17 September 2025 15:38:52 +0000 (0:00:00.991) 0:00:09.927 ***
2025-09-17 15:38:55.996634 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSNICjkZ5tia/hD7VLMj1gNr57NsULEcWWXfBpi6ENtfTRrTMKSBvy+f2AsRZQR6F3k0tFTyGUbbOdTXpndpaJF+Cd/MCWbXTCuN1oj221bbC19BmYcoSOFxAARyc1OxjTFSj/dfTMwMYw7RXKtRtgtFmxX05BkAt2cBDL/x2/MWqrT361y5NrNWJZ+hcCDI7GtD2Kvmpb8MmNKR1teb1LhyOQmpYqgQ+kInazQxh3fP3BgBw64WogcRvb0qlFTT65uFvptMsl/ImZPjTVeMG3cN7AFZMRZmxH+gzm8NUgU5YCadGHtAxKrn34C0j355xW9HXceRtkGJkvD/aWFWwP4zCdL1euznMSEp3CCE2lPZXDsgsDsrE+HO5ncLWzp+wvPlkdfn3t15svyH+Je6GQEwU8jdJbxxZRFe146IrK9miZBWdQLpkCH5/Le2cUhU3zKFRQhLlPx4KooBTwGmqADrsXLv+wr1ThWFx2H4vJI0O62HWGZHUY2eshLFA44KM=)
2025-09-17 15:38:55.996646 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNb6SmILXco/Fw0ZGZBkDVLSdwmxqfzre1/dHRUlwsL2d4yUuskdql+PAe4yNvxB0j6DIU+fkauyuNc7LzFAE9U=)
2025-09-17 15:38:55.996656 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAvUDjukZInnm+k9434TPItpm66DjPGw6Jen6AYFaGzw)
2025-09-17 15:38:55.996673 | orchestrator |
2025-09-17 15:38:55.996682 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-17 15:38:55.996691 | orchestrator | Wednesday 17 September 2025 15:38:54 +0000 (0:00:01.037) 0:00:10.965 ***
2025-09-17 15:38:55.996700 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUICM7p1q/CbjQUFemNLKMVqTeLLbrvdnGAA1GmS/8v9i4JssF2gE6btjnKKJvuo7oOye5IVAF+nUOPwbD6q+Chu3vYKUUDx6deoPCzVrTKISak92SEz4rQxmxFWZVD/IHYhCMMmLqmJ/n9R6U1So3ZT1kHq8VnwqOX7OndsVzZqG/28xcTNriOts1TtGwTEXs4sX8riZpAamAe2Nc/XuRhoZoVdeKgvc8e2sfJKfarQtuOoSSJPotuC4IX9P7R+Sv1KLlbLVTpLnXqL1vENuxvqjlcxr3ekIKOkyNdwgQfo4U0kYksXSqoxeifekKoh8BEoC/D6KXX2aEXkO0CMY2moWbZ24bDReFTpq498oHzj5u2xrbvfoPgrWsxVHauXH9n/cl5+y0yyykmcQZ2IegO4I4fhJhCMcAlrfBrxZPY6ot55nIVAbsGWeH4noUUh/IVgJAAe9cY3E1MUIV8Lg3dvkOkSsIAdR/c4V6ofQtK48J2g2AAeCp1/LJmP3SsH0=)
2025-09-17 15:38:55.996714 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFhysDE6dAZ3srtLbRgozz07DA7JMCHDMIgRK4qf4+hQddCCtglGI8WQn+KMe/V5V0UPYSy52LUr9zUZtMhxC8U=)
2025-09-17 15:38:55.996723 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2hNWkEV7jNJnTWjO4TfOn/bfqFeSX5Xk9Q1S+UdiSm)
2025-09-17 15:38:55.996732 | orchestrator |
2025-09-17 15:38:55.996740 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-17 15:38:55.996749 | orchestrator | Wednesday 17 September 2025 15:38:55 +0000 (0:00:01.000) 0:00:11.965 ***
2025-09-17 15:38:55.996763 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSxMHXbZwbCn0xXGy5ex7aGRxwl0vUhdRZhz++Xbnikcdy74D70AyUnQW+dXeFblNPwfBI3JIt3TWY56jsqlzlP8EATXSUOpLKfq/ukjHWyy+47cqC8fUiknbUwM0zYNrTUEIb3hsl3yXwO5UCl/EbJQUE/mEx/XUmmoABcfFZ4lHae8wCb99Yzc9ok8ChW5tS3fMrt6HhWdbSEsb/L5CJrkDAWxViDwVeXCTI+VqAA2D+02TZM1Oh283nBRPjC0nIgPBotBTPiG2BrZY9+5H8eW/X5HoPTxnXxBoYnAwTEMi85KYz7H6RzjIQd7ux5UpbXtlb4t1/jkEb7RYWhwD5xs+V04qhBN1yFfjeTJJKEd1awKcLsUxXNOlb9YrfP1eEjvgALheINQhG4LSmnj3WCZIbnr2XKtRwagbdWQaH7ae30S0e8k/HNBai4vZy/eObUQ7hJ2vF5YOfYmjP5ayZEFbwGrE12NPJ6QmvfoWq6Lh7oMTUtRhxRwRHSsuz5F8=)
2025-09-17 15:39:06.213265 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLhARuZunp3RsQ4LIr2j6eRIblUj0RygnGpYGOnp5whwZjSnujRVqSHsWfv32WUkVbbjTMOidD+sqhDlLu/2SJo=)
2025-09-17 15:39:06.213372 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMfqdblftvNw2zWnVN1m9Mr+vEfAdYQG0l+psIrVEnD0)
2025-09-17 15:39:06.213388 | orchestrator |
2025-09-17 15:39:06.213401 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-17 15:39:06.213415 | orchestrator | Wednesday 17 September 2025 15:38:55 +0000 (0:00:00.978) 0:00:12.943 ***
2025-09-17 15:39:06.213426 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEBeUETmC7Y+3Q+R/Uxu67qDIM+D5Cplmsti/tboavPPUl+zshMEfu9D9G65+2ReQ3w9g9Myt2FhU1T3W1ADZMM=)
2025-09-17 15:39:06.213437 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK+Uxq+qW50Pp1G5v20V9wBbqj6y9VAi1TSUK9mjeNfh)
2025-09-17 15:39:06.213450 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDOsxpZQ2JTr0/J8/DO7l0I6wI2Vua03Uw/wAOwrjvHkMa2cNRZsMbE/vBcrS054vkdUkGmeEmGNqYJDvPUD0H+BCXbWMSEZxH2GYw7vgNXpC6cNocgTWw8CL7fjIkZCdz0No/S9mwdwRRjSNCqn766BccpiNs7P2dfy8QxK2XhCiF2SRhfpLZbjDToICjn+CE6s55krtGEyHdE7N6prwEo6FXmmO0gctlpxvXIGJ5/XlOb7jsynbsmZ+hBO3Tq+T3oPrTo1ZOEYW/sVlv7F2aCF9YNfen0pA73+Z2xIyEM6qCkjo0CUkwPY2cWTIBb47nj7MW5q4BYv4yT/E3PGrnGgG3HYDN61sRBVlPWD5PX7hhmCpgWmly7cdIL/LoQNre73ZkGOsN0LY4H+PEDAQ6e1UmZ/DKex/VfWjEwMFphEth0BUeTFgI3/z7cUChIlqwcbMoAuhHMiSM94ZLcp4EFTi1LN094pazqIRIFTXOUQ6pjk5oYdznUiyQYat40FMU=) 2025-09-17 15:39:06.213464 | orchestrator | 2025-09-17 15:39:06.213550 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-17 15:39:06.213599 | orchestrator | Wednesday 17 September 2025 15:38:56 +0000 (0:00:00.930) 0:00:13.874 *** 2025-09-17 15:39:06.213612 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-17 15:39:06.213624 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-17 15:39:06.213635 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-17 15:39:06.213645 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-17 15:39:06.213656 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-17 15:39:06.213666 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-17 15:39:06.213677 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-17 15:39:06.213688 | orchestrator | 2025-09-17 15:39:06.213699 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-17 15:39:06.213712 | orchestrator | Wednesday 17 September 2025 15:39:01 +0000 (0:00:04.978) 0:00:18.852 *** 2025-09-17 15:39:06.213724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-manager) 2025-09-17 15:39:06.213737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-17 15:39:06.213748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-17 15:39:06.213759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-17 15:39:06.213769 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-17 15:39:06.213780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-17 15:39:06.213791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-17 15:39:06.213801 | orchestrator | 2025-09-17 15:39:06.213814 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 15:39:06.213827 | orchestrator | Wednesday 17 September 2025 15:39:02 +0000 (0:00:00.162) 0:00:19.014 *** 2025-09-17 15:39:06.213839 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIQtF8qWpBGSVlAhEkfaEq1zYXWqDEdtDR+FQjoCRof/r3Os1ScCxW2AJ5k8DyttIa1bOZl62xsH6EK4fAX/LX0=) 2025-09-17 15:39:06.213892 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnIGR9RTmLpsO7SVcOuOg7zviLQwqSJqwebBI4zc9mjvEkJRg2qLZ192u4vUaUmlTO5cS/eURoZXfw8XMo0d/RHTGacfzUBQYck69ZXNkwuAAcmJC1teAnrdME29mpaSLyervLnUhiMi3btmMBv2totlXts1h2KHqglsvbHAkUTJhd9cjNbwUm0RkqAzROXljGjYxLoqkgmAjUzTEUGaQhrCSKA7WaOKrymTS2ZjLTQl+XXVOWLGp1ZmH/P+A0Lwm/x3PLuMdOiuAWa2EqgEBF7VI0+Z6x7fMAuI4djp8gwfh0sgxSxefOTnVqJirifk01WepXu2wjKirhglLg1rVnWzFn7bHbX+TXxfxAdxytkzYAN/NitvAp8T6xAdCIgI56mMBa2KLWzJI9k5dlzV94Kb2Vx1CCqX7Bv7VC4OKmidoXAmTEWGjB2WBFVRFsOLkC9wVtxRCDPgMsCXrG3D6WIPsHyoTU081o0/viVY2+FkhAsUAxcviCHBLx7rQCWac=) 2025-09-17 15:39:06.213907 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFU818mW5IeIcF2Ee02G6GczbQgsZPJllHa2SQ3JlUnh) 2025-09-17 15:39:06.213921 | orchestrator | 2025-09-17 15:39:06.213933 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 15:39:06.213946 | orchestrator | Wednesday 17 September 2025 15:39:03 +0000 (0:00:01.037) 0:00:20.051 *** 2025-09-17 15:39:06.213967 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG6bh95lytf77q0ku1dsrf7unqb/d2QtjT626CjpDgGjBhfjPDDAN0MAePwtvRSw8UaHdhRaggL7v49wrfhUL8I=) 2025-09-17 15:39:06.213981 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvpzt5hj73yj09AwjmQG4UdObpJUnsg+AUL1ceJSoWpCVh78j5y6mLFTHvOsUl4UC0JRqIYNbYPA4WDnDlnnMceTetIMCHRpfT0uWmF3f47c/+NCegQXUWfe8ZFBwcC0D4TfFbLHZeozWQLsgb/ZQeESiywImYDqSIKu9NCaWEv7JCAeoOon1DgqXMtwRc2AOciGWoLnQ5KqPasnBiwnvYbk7VQtNTaR7OUVrtt2W83wQXyD/jX8QQMXQqYyr/yeSBoioBVyOurShFPYMnAoMA3lYU1N7vxVZtnLI2Bq8epffzGB+Du2PBI9zAjQ5l7pDsTErB4GDM7W2ieHmTjIlWvdaK64cWwaD0Zl1cJS9lc3Zm7EuZ045YKfyj8OYD9+W6gBsggoV6z7W01sCL90vV1NcquAcl3g9BV/LYPRsFDXlutkBnvOtkrQ8mGTOpHGpIVjjQjNw+MLQdZVCwHVe+asoczHmjLfn0yyL80DKR8avnalyhZVvPsNdLOXulGxs=) 2025-09-17 15:39:06.213994 | orchestrator | changed: 
[testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH724m6GLb4uc/jH4/WLiTGoICLnk1te+4EE8HehTYjf) 2025-09-17 15:39:06.214006 | orchestrator | 2025-09-17 15:39:06.214072 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 15:39:06.214085 | orchestrator | Wednesday 17 September 2025 15:39:04 +0000 (0:00:01.029) 0:00:21.081 *** 2025-09-17 15:39:06.214098 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDk9NynPFJJtVWGbtNCyb4PtLXfl9gz5pftlk356Z5F4V2jaAdUq50iR5eXcXOAEB5WUeC1t2kNHtarkxOIwBG8=) 2025-09-17 15:39:06.214110 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKSKcj0aHrOO6pAouByr3LPmpGBLBtu7JdLF9rwG48ZC+5Cx3EI/wKnPMgUV/nX0/g+pPDE+5vGsZfVfBti7CF8OTYw5OYSJU4/7ShUFSMSRA+qWiHQv9ZP01kE4mhCprnlZTVescvhIZklP4FXTOrRcNeJJzqNjNK0BHTgR9GJNDEFBe2H4TiOKJ+GVWs6MhhBzkiNiwkQMpHk9Xl1YJooE+T6kxrW4A19Ur6IMiU2OTucuAn6p0ZTpoqaO05EAvOr+fDhcHBqQzhdX6A3Ak25R5Wj/DCH5SGg3khvYbNFETfD14ZblKnNiTXPfjpc9dqDs08IqSPqnOZS9YktOJNRTzvpBNUCX74dKc76youBFxHFuR6nq+GiCAZDNmIm1t6pbhHD+ks50KY/KvwnHR+kzUb/7wJT0g5cJ1YFNzVR1XEdy9st99Erx6xWTV3IS5lzNrSYGJwkYf53UTuc5WbohNk3PVPwuG/a+nkTPacjG1NzZ28X1tSO5IkpxqDVvk=) 2025-09-17 15:39:06.214123 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFBDHEHR30EnNU3NwyT/nl+3y83IcPZLVkKtfw+qblAl) 2025-09-17 15:39:06.214136 | orchestrator | 2025-09-17 15:39:06.214148 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 15:39:06.214160 | orchestrator | Wednesday 17 September 2025 15:39:05 +0000 (0:00:01.040) 0:00:22.121 *** 2025-09-17 15:39:06.214171 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNb6SmILXco/Fw0ZGZBkDVLSdwmxqfzre1/dHRUlwsL2d4yUuskdql+PAe4yNvxB0j6DIU+fkauyuNc7LzFAE9U=) 2025-09-17 15:39:06.214182 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSNICjkZ5tia/hD7VLMj1gNr57NsULEcWWXfBpi6ENtfTRrTMKSBvy+f2AsRZQR6F3k0tFTyGUbbOdTXpndpaJF+Cd/MCWbXTCuN1oj221bbC19BmYcoSOFxAARyc1OxjTFSj/dfTMwMYw7RXKtRtgtFmxX05BkAt2cBDL/x2/MWqrT361y5NrNWJZ+hcCDI7GtD2Kvmpb8MmNKR1teb1LhyOQmpYqgQ+kInazQxh3fP3BgBw64WogcRvb0qlFTT65uFvptMsl/ImZPjTVeMG3cN7AFZMRZmxH+gzm8NUgU5YCadGHtAxKrn34C0j355xW9HXceRtkGJkvD/aWFWwP4zCdL1euznMSEp3CCE2lPZXDsgsDsrE+HO5ncLWzp+wvPlkdfn3t15svyH+Je6GQEwU8jdJbxxZRFe146IrK9miZBWdQLpkCH5/Le2cUhU3zKFRQhLlPx4KooBTwGmqADrsXLv+wr1ThWFx2H4vJI0O62HWGZHUY2eshLFA44KM=) 2025-09-17 15:39:06.214207 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAvUDjukZInnm+k9434TPItpm66DjPGw6Jen6AYFaGzw) 2025-09-17 15:39:10.252324 | orchestrator | 2025-09-17 15:39:10.252428 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 15:39:10.252445 | orchestrator | Wednesday 17 September 2025 15:39:06 +0000 (0:00:01.036) 0:00:23.157 *** 2025-09-17 15:39:10.252458 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFhysDE6dAZ3srtLbRgozz07DA7JMCHDMIgRK4qf4+hQddCCtglGI8WQn+KMe/V5V0UPYSy52LUr9zUZtMhxC8U=) 2025-09-17 15:39:10.252530 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2hNWkEV7jNJnTWjO4TfOn/bfqFeSX5Xk9Q1S+UdiSm) 2025-09-17 15:39:10.252547 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCUICM7p1q/CbjQUFemNLKMVqTeLLbrvdnGAA1GmS/8v9i4JssF2gE6btjnKKJvuo7oOye5IVAF+nUOPwbD6q+Chu3vYKUUDx6deoPCzVrTKISak92SEz4rQxmxFWZVD/IHYhCMMmLqmJ/n9R6U1So3ZT1kHq8VnwqOX7OndsVzZqG/28xcTNriOts1TtGwTEXs4sX8riZpAamAe2Nc/XuRhoZoVdeKgvc8e2sfJKfarQtuOoSSJPotuC4IX9P7R+Sv1KLlbLVTpLnXqL1vENuxvqjlcxr3ekIKOkyNdwgQfo4U0kYksXSqoxeifekKoh8BEoC/D6KXX2aEXkO0CMY2moWbZ24bDReFTpq498oHzj5u2xrbvfoPgrWsxVHauXH9n/cl5+y0yyykmcQZ2IegO4I4fhJhCMcAlrfBrxZPY6ot55nIVAbsGWeH4noUUh/IVgJAAe9cY3E1MUIV8Lg3dvkOkSsIAdR/c4V6ofQtK48J2g2AAeCp1/LJmP3SsH0=) 2025-09-17 15:39:10.252561 | orchestrator | 2025-09-17 15:39:10.252573 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 15:39:10.252584 | orchestrator | Wednesday 17 September 2025 15:39:07 +0000 (0:00:01.021) 0:00:24.179 *** 2025-09-17 15:39:10.252595 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSxMHXbZwbCn0xXGy5ex7aGRxwl0vUhdRZhz++Xbnikcdy74D70AyUnQW+dXeFblNPwfBI3JIt3TWY56jsqlzlP8EATXSUOpLKfq/ukjHWyy+47cqC8fUiknbUwM0zYNrTUEIb3hsl3yXwO5UCl/EbJQUE/mEx/XUmmoABcfFZ4lHae8wCb99Yzc9ok8ChW5tS3fMrt6HhWdbSEsb/L5CJrkDAWxViDwVeXCTI+VqAA2D+02TZM1Oh283nBRPjC0nIgPBotBTPiG2BrZY9+5H8eW/X5HoPTxnXxBoYnAwTEMi85KYz7H6RzjIQd7ux5UpbXtlb4t1/jkEb7RYWhwD5xs+V04qhBN1yFfjeTJJKEd1awKcLsUxXNOlb9YrfP1eEjvgALheINQhG4LSmnj3WCZIbnr2XKtRwagbdWQaH7ae30S0e8k/HNBai4vZy/eObUQ7hJ2vF5YOfYmjP5ayZEFbwGrE12NPJ6QmvfoWq6Lh7oMTUtRhxRwRHSsuz5F8=) 2025-09-17 15:39:10.252607 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLhARuZunp3RsQ4LIr2j6eRIblUj0RygnGpYGOnp5whwZjSnujRVqSHsWfv32WUkVbbjTMOidD+sqhDlLu/2SJo=) 2025-09-17 15:39:10.252634 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMfqdblftvNw2zWnVN1m9Mr+vEfAdYQG0l+psIrVEnD0) 2025-09-17 15:39:10.252646 | orchestrator | 2025-09-17 15:39:10.252657 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-17 15:39:10.252668 | orchestrator | Wednesday 17 September 2025 15:39:08 +0000 (0:00:01.004) 0:00:25.183 *** 2025-09-17 15:39:10.252679 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEBeUETmC7Y+3Q+R/Uxu67qDIM+D5Cplmsti/tboavPPUl+zshMEfu9D9G65+2ReQ3w9g9Myt2FhU1T3W1ADZMM=) 2025-09-17 15:39:10.252691 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOsxpZQ2JTr0/J8/DO7l0I6wI2Vua03Uw/wAOwrjvHkMa2cNRZsMbE/vBcrS054vkdUkGmeEmGNqYJDvPUD0H+BCXbWMSEZxH2GYw7vgNXpC6cNocgTWw8CL7fjIkZCdz0No/S9mwdwRRjSNCqn766BccpiNs7P2dfy8QxK2XhCiF2SRhfpLZbjDToICjn+CE6s55krtGEyHdE7N6prwEo6FXmmO0gctlpxvXIGJ5/XlOb7jsynbsmZ+hBO3Tq+T3oPrTo1ZOEYW/sVlv7F2aCF9YNfen0pA73+Z2xIyEM6qCkjo0CUkwPY2cWTIBb47nj7MW5q4BYv4yT/E3PGrnGgG3HYDN61sRBVlPWD5PX7hhmCpgWmly7cdIL/LoQNre73ZkGOsN0LY4H+PEDAQ6e1UmZ/DKex/VfWjEwMFphEth0BUeTFgI3/z7cUChIlqwcbMoAuhHMiSM94ZLcp4EFTi1LN094pazqIRIFTXOUQ6pjk5oYdznUiyQYat40FMU=) 2025-09-17 15:39:10.252703 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK+Uxq+qW50Pp1G5v20V9wBbqj6y9VAi1TSUK9mjeNfh) 2025-09-17 15:39:10.252714 | orchestrator | 2025-09-17 15:39:10.252725 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-17 15:39:10.252736 | orchestrator | Wednesday 17 September 2025 15:39:09 +0000 (0:00:01.001) 0:00:26.185 *** 2025-09-17 15:39:10.252747 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-17 15:39:10.252758 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-17 15:39:10.252769 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-17 15:39:10.252787 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-17 15:39:10.252798 | orchestrator 
| skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-17 15:39:10.252809 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-17 15:39:10.252819 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-17 15:39:10.252830 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:39:10.252841 | orchestrator |
2025-09-17 15:39:10.252871 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-09-17 15:39:10.252886 | orchestrator | Wednesday 17 September 2025 15:39:09 +0000 (0:00:00.154) 0:00:26.339 ***
2025-09-17 15:39:10.252898 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:39:10.252910 | orchestrator |
2025-09-17 15:39:10.252940 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-09-17 15:39:10.252953 | orchestrator | Wednesday 17 September 2025 15:39:09 +0000 (0:00:00.059) 0:00:26.405 ***
2025-09-17 15:39:10.252965 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:39:10.252977 | orchestrator |
2025-09-17 15:39:10.252989 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-09-17 15:39:10.253002 | orchestrator | Wednesday 17 September 2025 15:39:09 +0000 (0:00:00.059) 0:00:26.464 ***
2025-09-17 15:39:10.253014 | orchestrator | changed: [testbed-manager]
2025-09-17 15:39:10.253024 | orchestrator |
2025-09-17 15:39:10.253035 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 15:39:10.253046 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-17 15:39:10.253058 | orchestrator |
2025-09-17 15:39:10.253069 | orchestrator |
2025-09-17 15:39:10.253080 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 15:39:10.253091 | orchestrator | Wednesday 17 September 2025 15:39:09 +0000 (0:00:00.471) 0:00:26.936 ***
2025-09-17 15:39:10.253101 | orchestrator | ===============================================================================
2025-09-17 15:39:10.253112 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.52s
2025-09-17 15:39:10.253123 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 4.98s
2025-09-17 15:39:10.253134 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.14s
2025-09-17 15:39:10.253145 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-09-17 15:39:10.253155 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-09-17 15:39:10.253166 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-09-17 15:39:10.253176 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-09-17 15:39:10.253187 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-09-17 15:39:10.253198 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-09-17 15:39:10.253208 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-09-17 15:39:10.253219 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-09-17 15:39:10.253230 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-09-17 15:39:10.253240 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-09-17 15:39:10.253251 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-09-17 15:39:10.253261 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s
2025-09-17 15:39:10.253272 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s
2025-09-17 15:39:10.253282 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.47s
2025-09-17 15:39:10.253293 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s
2025-09-17 15:39:10.253311 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s
2025-09-17 15:39:10.253322 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s
2025-09-17 15:39:10.508797 | orchestrator | + osism apply squid
2025-09-17 15:39:22.384961 | orchestrator | 2025-09-17 15:39:22 | INFO  | Task 1f75e2c4-8131-4f66-acc4-239f7f902c92 (squid) was prepared for execution.
2025-09-17 15:39:22.385078 | orchestrator | 2025-09-17 15:39:22 | INFO  | It takes a moment until task 1f75e2c4-8131-4f66-acc4-239f7f902c92 (squid) has been started and output is visible here.
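The scan-and-write cycle that dominates the known_hosts recap above follows a common Ansible pattern: loop `ssh-keyscan` over the inventory, then feed each returned line into the `known_hosts` module. The following is a hedged sketch of that pattern under assumed names, not the actual `osism.commons.known_hosts` implementation; the destination path and loop source are illustrative:

```yaml
# Hypothetical sketch of the scan-then-write pattern logged above.
- name: Run ssh-keyscan for all hosts
  ansible.builtin.command: "ssh-keyscan -t rsa,ecdsa,ed25519 {{ item }}"
  loop: "{{ groups['all'] }}"
  register: keyscan
  changed_when: false

- name: Write scanned known_hosts entries
  ansible.builtin.known_hosts:
    path: /etc/ssh/ssh_known_hosts        # assumed destination
    name: "{{ entry.split(' ') | first }}"
    key: "{{ entry }}"
  loop: "{{ keyscan.results | map(attribute='stdout_lines') | flatten }}"
  loop_control:
    loop_var: entry
```

Scanning once per host and writing per key type matches the log, where each host yields one `changed` item per algorithm (ssh-rsa, ecdsa-sha2-nistp256, ssh-ed25519).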
2025-09-17 15:41:14.665959 | orchestrator | 2025-09-17 15:41:14.666141 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-17 15:41:14.666160 | orchestrator | 2025-09-17 15:41:14.666173 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-17 15:41:14.666184 | orchestrator | Wednesday 17 September 2025 15:39:25 +0000 (0:00:00.149) 0:00:00.149 *** 2025-09-17 15:41:14.666195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-17 15:41:14.666207 | orchestrator | 2025-09-17 15:41:14.666218 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-17 15:41:14.666228 | orchestrator | Wednesday 17 September 2025 15:39:25 +0000 (0:00:00.073) 0:00:00.222 *** 2025-09-17 15:41:14.666239 | orchestrator | ok: [testbed-manager] 2025-09-17 15:41:14.666251 | orchestrator | 2025-09-17 15:41:14.666262 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-17 15:41:14.666272 | orchestrator | Wednesday 17 September 2025 15:39:27 +0000 (0:00:01.310) 0:00:01.533 *** 2025-09-17 15:41:14.666283 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-17 15:41:14.666294 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-17 15:41:14.666305 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-17 15:41:14.666316 | orchestrator | 2025-09-17 15:41:14.666327 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-17 15:41:14.666337 | orchestrator | Wednesday 17 September 2025 15:39:28 +0000 (0:00:01.110) 0:00:02.643 *** 2025-09-17 15:41:14.666348 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-17 15:41:14.666358 | 
orchestrator | 2025-09-17 15:41:14.666369 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-17 15:41:14.666380 | orchestrator | Wednesday 17 September 2025 15:39:29 +0000 (0:00:01.002) 0:00:03.646 *** 2025-09-17 15:41:14.666391 | orchestrator | ok: [testbed-manager] 2025-09-17 15:41:14.666401 | orchestrator | 2025-09-17 15:41:14.666412 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-17 15:41:14.666423 | orchestrator | Wednesday 17 September 2025 15:39:29 +0000 (0:00:00.341) 0:00:03.987 *** 2025-09-17 15:41:14.666433 | orchestrator | changed: [testbed-manager] 2025-09-17 15:41:14.666444 | orchestrator | 2025-09-17 15:41:14.666455 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-17 15:41:14.666485 | orchestrator | Wednesday 17 September 2025 15:39:30 +0000 (0:00:00.894) 0:00:04.882 *** 2025-09-17 15:41:14.666498 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
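The `FAILED - RETRYING: ... (10 retries left)` line above is Ansible's built-in retry loop, driven by `retries`/`delay`/`until` on a task. A minimal sketch of that mechanism, with the module and timing chosen as assumptions rather than taken from the actual osism.services.squid role:

```yaml
# Hypothetical sketch: keep retrying a service-management task until it
# succeeds, as the "Manage squid service" task does in the log above.
- name: Manage squid service
  community.docker.docker_compose_v2:
    project_src: /opt/squid        # assumed compose project directory
  register: result
  retries: 10
  delay: 5
  until: result is success
```

With 10 retries and a delay between attempts, a slow container pull can account for the ~31 s this task takes in the recap below.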
2025-09-17 15:41:14.666543 | orchestrator | ok: [testbed-manager] 2025-09-17 15:41:14.666562 | orchestrator | 2025-09-17 15:41:14.666582 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-17 15:41:14.666602 | orchestrator | Wednesday 17 September 2025 15:40:01 +0000 (0:00:30.978) 0:00:35.860 *** 2025-09-17 15:41:14.666615 | orchestrator | changed: [testbed-manager] 2025-09-17 15:41:14.666626 | orchestrator | 2025-09-17 15:41:14.666639 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-17 15:41:14.666650 | orchestrator | Wednesday 17 September 2025 15:40:13 +0000 (0:00:12.006) 0:00:47.867 *** 2025-09-17 15:41:14.666663 | orchestrator | Pausing for 60 seconds 2025-09-17 15:41:14.666697 | orchestrator | changed: [testbed-manager] 2025-09-17 15:41:14.666710 | orchestrator | 2025-09-17 15:41:14.666722 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-17 15:41:14.666734 | orchestrator | Wednesday 17 September 2025 15:41:13 +0000 (0:01:00.078) 0:01:47.945 *** 2025-09-17 15:41:14.666746 | orchestrator | ok: [testbed-manager] 2025-09-17 15:41:14.666758 | orchestrator | 2025-09-17 15:41:14.666771 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-17 15:41:14.666790 | orchestrator | Wednesday 17 September 2025 15:41:13 +0000 (0:00:00.064) 0:01:48.010 *** 2025-09-17 15:41:14.666810 | orchestrator | changed: [testbed-manager] 2025-09-17 15:41:14.666829 | orchestrator | 2025-09-17 15:41:14.666848 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 15:41:14.666863 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 15:41:14.666874 | orchestrator | 2025-09-17 15:41:14.666885 | orchestrator | 2025-09-17 15:41:14.666896 | orchestrator | 
TASKS RECAP ********************************************************************
2025-09-17 15:41:14.666907 | orchestrator | Wednesday 17 September 2025 15:41:14 +0000 (0:00:00.655) 0:01:48.665 ***
2025-09-17 15:41:14.666917 | orchestrator | ===============================================================================
2025-09-17 15:41:14.666928 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-09-17 15:41:14.666938 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.98s
2025-09-17 15:41:14.666949 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.01s
2025-09-17 15:41:14.666960 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.31s
2025-09-17 15:41:14.666971 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.11s
2025-09-17 15:41:14.666981 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.00s
2025-09-17 15:41:14.666992 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.89s
2025-09-17 15:41:14.667003 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s
2025-09-17 15:41:14.667014 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s
2025-09-17 15:41:14.667024 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s
2025-09-17 15:41:14.667035 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2025-09-17 15:41:14.902227 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-09-17 15:41:14.902293 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-09-17 15:41:14.906476 | orchestrator | ++ semver 9.2.0 9.0.0
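The shell trace above pins `docker_namespace` to the release namespace for any non-`latest` version, then compares versions with a `semver` helper whose result feeds the following `[[ ... -lt 0 ]]` test. A hedged reconstruction of that gate; the `semver_cmp` function is a stand-in assumption for the job's actual `semver` command:

```shell
#!/usr/bin/env bash
# Hedged sketch of the version-gate logic traced in the log above.
set -euo pipefail

# Prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2 (sort -V based stand-in
# for the job's `semver` helper).
semver_cmp() {
    if [[ "$1" == "$2" ]]; then echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then echo -1
    else echo 1
    fi
}

MANAGER_VERSION="9.2.0"
kolla_vars="$(mktemp)"                       # stand-in for kolla.yml
echo "docker_namespace: kolla" > "${kolla_vars}"

# A pinned (non-latest) version switches images to the release namespace,
# exactly the sed rewrite shown in the trace.
if [[ "${MANAGER_VERSION}" != "latest" ]]; then
    sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' "${kolla_vars}"
fi

cat "${kolla_vars}"                          # docker_namespace: kolla/release
semver_cmp "${MANAGER_VERSION}" "9.0.0"      # prints 1: newer than 9.0.0
```

In the trace, `semver 9.2.0 9.0.0` returns 1, so the subsequent `[[ 1 -lt 0 ]]` is false and the older-version branch is skipped.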
2025-09-17 15:41:14.968657 | orchestrator | + [[ 1 -lt 0 ]]
2025-09-17 15:41:14.968984 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-09-17 15:41:26.894332 | orchestrator | 2025-09-17 15:41:26 | INFO  | Task c643ff2f-ec9a-414f-84e3-448e16af439e (operator) was prepared for execution.
2025-09-17 15:41:26.894445 | orchestrator | 2025-09-17 15:41:26 | INFO  | It takes a moment until task c643ff2f-ec9a-414f-84e3-448e16af439e (operator) has been started and output is visible here.
2025-09-17 15:41:42.389923 | orchestrator |
2025-09-17 15:41:42.390095 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-09-17 15:41:42.390114 | orchestrator |
2025-09-17 15:41:42.390126 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-17 15:41:42.390137 | orchestrator | Wednesday 17 September 2025 15:41:30 +0000 (0:00:00.155) 0:00:00.155 ***
2025-09-17 15:41:42.390148 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:41:42.390160 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:41:42.390171 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:41:42.390182 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:41:42.390192 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:41:42.390227 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:41:42.390239 | orchestrator |
2025-09-17 15:41:42.390250 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-09-17 15:41:42.390261 | orchestrator | Wednesday 17 September 2025 15:41:34 +0000 (0:00:03.537) 0:00:03.693 ***
2025-09-17 15:41:42.390272 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:41:42.390283 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:41:42.390293 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:41:42.390304 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:41:42.390315 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:41:42.390325 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:41:42.390336 | orchestrator |
2025-09-17 15:41:42.390347 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-09-17 15:41:42.390358 | orchestrator |
2025-09-17 15:41:42.390369 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-09-17 15:41:42.390380 | orchestrator | Wednesday 17 September 2025 15:41:35 +0000 (0:00:00.729) 0:00:04.422 ***
2025-09-17 15:41:42.390390 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:41:42.390401 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:41:42.390411 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:41:42.390422 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:41:42.390432 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:41:42.390443 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:41:42.390453 | orchestrator |
2025-09-17 15:41:42.390464 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-09-17 15:41:42.390475 | orchestrator | Wednesday 17 September 2025 15:41:35 +0000 (0:00:00.184) 0:00:04.606 ***
2025-09-17 15:41:42.390486 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:41:42.390496 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:41:42.390529 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:41:42.390540 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:41:42.390551 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:41:42.390562 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:41:42.390572 | orchestrator |
2025-09-17 15:41:42.390583 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-09-17 15:41:42.390594 | orchestrator | Wednesday 17 September 2025 15:41:35 +0000 (0:00:00.185) 0:00:04.792 ***
2025-09-17 15:41:42.390605 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:41:42.390616 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:41:42.390627 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:41:42.390638 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:41:42.390648 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:41:42.390659 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:41:42.390670 | orchestrator |
2025-09-17 15:41:42.390680 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-09-17 15:41:42.390691 | orchestrator | Wednesday 17 September 2025 15:41:35 +0000 (0:00:00.598) 0:00:05.391 ***
2025-09-17 15:41:42.390702 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:41:42.390712 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:41:42.390723 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:41:42.390733 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:41:42.390744 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:41:42.390754 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:41:42.390765 | orchestrator |
2025-09-17 15:41:42.390776 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-09-17 15:41:42.390786 | orchestrator | Wednesday 17 September 2025 15:41:36 +0000 (0:00:00.754) 0:00:06.145 ***
2025-09-17 15:41:42.390797 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-09-17 15:41:42.390808 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-09-17 15:41:42.390819 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-09-17 15:41:42.390830 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-09-17 15:41:42.390840 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-09-17 15:41:42.390851 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-09-17 15:41:42.390869 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-09-17 15:41:42.390880 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-09-17 15:41:42.390891 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-09-17 15:41:42.390902 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-09-17 15:41:42.390912 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-09-17 15:41:42.390923 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-09-17 15:41:42.390934 | orchestrator |
2025-09-17 15:41:42.390949 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-09-17 15:41:42.390960 | orchestrator | Wednesday 17 September 2025 15:41:37 +0000 (0:00:01.126) 0:00:07.272 ***
2025-09-17 15:41:42.390971 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:41:42.390981 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:41:42.390992 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:41:42.391002 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:41:42.391013 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:41:42.391023 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:41:42.391034 | orchestrator |
2025-09-17 15:41:42.391045 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-09-17 15:41:42.391056 | orchestrator | Wednesday 17 September 2025 15:41:39 +0000 (0:00:01.184) 0:00:08.456 ***
2025-09-17 15:41:42.391067 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-09-17 15:41:42.391078 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-09-17 15:41:42.391089 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-17 15:41:42.391100 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-17 15:41:42.391127 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-17 15:41:42.391138 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-17 15:41:42.391149 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-17 15:41:42.391159 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-17 15:41:42.391170 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-17 15:41:42.391181 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-17 15:41:42.391191 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-17 15:41:42.391218 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-17 15:41:42.391230 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-17 15:41:42.391241 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-17 15:41:42.391251 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-17 15:41:42.391262 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-17 15:41:42.391277 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-17 15:41:42.391288 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-17 15:41:42.391299 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-17 15:41:42.391309 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-17 15:41:42.391320 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-17 15:41:42.391330 | orchestrator |
2025-09-17 15:41:42.391341 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-17 15:41:42.391352 | orchestrator | Wednesday 17 September 2025 15:41:40 +0000 (0:00:01.365) 0:00:09.822 ***
2025-09-17 15:41:42.391363 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:41:42.391374 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:41:42.391384 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:41:42.391395 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:41:42.391405 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:41:42.391423 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:41:42.391434 | orchestrator |
2025-09-17 15:41:42.391444 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-17 15:41:42.391455 | orchestrator | Wednesday 17 September 2025 15:41:40 +0000 (0:00:00.150) 0:00:09.973 ***
2025-09-17 15:41:42.391465 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:41:42.391476 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:41:42.391486 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:41:42.391497 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:41:42.391523 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:41:42.391533 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:41:42.391544 | orchestrator |
2025-09-17 15:41:42.391554 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-17 15:41:42.391565 | orchestrator | Wednesday 17 September 2025 15:41:41 +0000 (0:00:00.544) 0:00:10.517 ***
2025-09-17 15:41:42.391576 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:41:42.391586 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:41:42.391596 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:41:42.391607 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:41:42.391617 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:41:42.391627 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:41:42.391638 | orchestrator |
2025-09-17 15:41:42.391648 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-17 15:41:42.391659 | orchestrator | Wednesday 17 September 2025 15:41:41 +0000 (0:00:00.160) 0:00:10.678 ***
2025-09-17 15:41:42.391669 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-17 15:41:42.391680 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:41:42.391690 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-17 15:41:42.391701 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:41:42.391711 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-17 15:41:42.391721 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:41:42.391732 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 15:41:42.391742 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-17 15:41:42.391753 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:41:42.391763 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:41:42.391773 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-17 15:41:42.391784 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:41:42.391794 | orchestrator |
2025-09-17 15:41:42.391805 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-17 15:41:42.391815 | orchestrator | Wednesday 17 September 2025 15:41:41 +0000 (0:00:00.670) 0:00:11.348 ***
2025-09-17 15:41:42.391826 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:41:42.391836 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:41:42.391847 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:41:42.391857 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:41:42.391868 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:41:42.391878 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:41:42.391889 | orchestrator |
2025-09-17 15:41:42.391899 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-17 15:41:42.391910 | orchestrator | Wednesday 17 September 2025 15:41:42 +0000 (0:00:00.134) 0:00:11.482 ***
2025-09-17 15:41:42.391921 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:41:42.391931 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:41:42.391941 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:41:42.391952 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:41:42.391962 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:41:42.391972 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:41:42.391983 | orchestrator |
2025-09-17 15:41:42.391993 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-17 15:41:42.392004 | orchestrator | Wednesday 17 September 2025 15:41:42 +0000 (0:00:00.151) 0:00:11.634 ***
2025-09-17 15:41:42.392015 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:41:42.392032 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:41:42.392043 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:41:42.392054 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:41:42.392071 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:41:43.497959 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:41:43.498098 | orchestrator |
2025-09-17 15:41:43.498117 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-17 15:41:43.498131 | orchestrator | Wednesday 17 September 2025 15:41:42 +0000 (0:00:00.146) 0:00:11.781 ***
2025-09-17 15:41:43.498143 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:41:43.498154 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:41:43.498165 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:41:43.498176 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:41:43.498187 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:41:43.498197 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:41:43.498208 | orchestrator |
2025-09-17 15:41:43.498219 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-17 15:41:43.498230 | orchestrator | Wednesday 17 September 2025 15:41:43 +0000 (0:00:00.625) 0:00:12.406 ***
2025-09-17 15:41:43.498240 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:41:43.498251 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:41:43.498262 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:41:43.498272 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:41:43.498283 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:41:43.498293 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:41:43.498304 | orchestrator |
2025-09-17 15:41:43.498315 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 15:41:43.498326 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 15:41:43.498339 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 15:41:43.498350 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 15:41:43.498360 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 15:41:43.498371 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 15:41:43.498382 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 15:41:43.498393 | orchestrator |
2025-09-17 15:41:43.498404 | orchestrator |
2025-09-17 15:41:43.498414 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 15:41:43.498425 | orchestrator | Wednesday 17 September 2025 15:41:43 +0000 (0:00:00.241) 0:00:12.648 ***
2025-09-17 15:41:43.498436 | orchestrator | ===============================================================================
2025-09-17 15:41:43.498448 | orchestrator | Gathering Facts --------------------------------------------------------- 3.54s
2025-09-17 15:41:43.498459 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.37s
2025-09-17 15:41:43.498471 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s
2025-09-17 15:41:43.498482 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.13s
2025-09-17 15:41:43.498493 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.75s
2025-09-17 15:41:43.498529 | orchestrator | Do not require tty for all users ---------------------------------------- 0.73s
2025-09-17 15:41:43.498542 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.67s
2025-09-17 15:41:43.498582 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2025-09-17 15:41:43.498595 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2025-09-17 15:41:43.498607 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s
2025-09-17 15:41:43.498619 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.24s
2025-09-17 15:41:43.498631 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2025-09-17 15:41:43.498644 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2025-09-17 15:41:43.498656 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2025-09-17 15:41:43.498668 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2025-09-17 15:41:43.498680 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2025-09-17 15:41:43.498693 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2025-09-17 15:41:43.498705 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s
2025-09-17 15:41:43.742762 | orchestrator | + osism apply --environment custom facts
2025-09-17 15:41:45.526817 | orchestrator | 2025-09-17 15:41:45 | INFO  | Trying to run play facts in environment custom
2025-09-17 15:41:55.629696 | orchestrator | 2025-09-17 15:41:55 | INFO  | Task f006a62e-33d6-4da2-aa7f-56c05bd5cbb4 (facts) was prepared for execution.
2025-09-17 15:41:55.629813 | orchestrator | 2025-09-17 15:41:55 | INFO  | It takes a moment until task f006a62e-33d6-4da2-aa7f-56c05bd5cbb4 (facts) has been started and output is visible here.
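[Editor's note] The `facts` play that follows distributes custom fact files (e.g. `testbed_ceph_osd_devices`) to the nodes. These work via Ansible's standard local-facts mechanism: any `*.fact` JSON file under `/etc/ansible/facts.d` is exposed as `ansible_local.<name>` on the next fact-gathering run. A minimal self-contained sketch (fact name taken from the log above; the temp directory stands in for `/etc/ansible/facts.d`, which is an assumption for illustration only):

```shell
# Simulate a custom fact file as copied by the "Copy fact files" task.
factsd="$(mktemp -d)"
cat > "$factsd/testbed_ceph_osd_devices.fact" <<'EOF'
{"devices": ["/dev/sdb", "/dev/sdc"]}
EOF
# Ansible would expose this as ansible_local.testbed_ceph_osd_devices;
# here we only verify the file is valid JSON.
python3 -c "import json,sys; json.load(open(sys.argv[1]))" \
    "$factsd/testbed_ceph_osd_devices.fact" && echo "fact file OK"
```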
2025-09-17 15:42:38.576399 | orchestrator |
2025-09-17 15:42:38.576478 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-17 15:42:38.576486 | orchestrator |
2025-09-17 15:42:38.576491 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-17 15:42:38.576496 | orchestrator | Wednesday 17 September 2025 15:41:59 +0000 (0:00:00.083) 0:00:00.083 ***
2025-09-17 15:42:38.576501 | orchestrator | ok: [testbed-manager]
2025-09-17 15:42:38.576544 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:42:38.576551 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:42:38.576555 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:42:38.576560 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:42:38.576565 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:42:38.576570 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:42:38.576574 | orchestrator |
2025-09-17 15:42:38.576579 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-17 15:42:38.576598 | orchestrator | Wednesday 17 September 2025 15:42:00 +0000 (0:00:01.343) 0:00:01.426 ***
2025-09-17 15:42:38.576603 | orchestrator | ok: [testbed-manager]
2025-09-17 15:42:38.576608 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:42:38.576612 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:42:38.576616 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:42:38.576624 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:42:38.576629 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:42:38.576633 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:42:38.576637 | orchestrator |
2025-09-17 15:42:38.576641 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-17 15:42:38.576646 | orchestrator |
2025-09-17 15:42:38.576650 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-17 15:42:38.576654 | orchestrator | Wednesday 17 September 2025 15:42:01 +0000 (0:00:01.080) 0:00:02.507 ***
2025-09-17 15:42:38.576659 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:42:38.576663 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:42:38.576668 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:42:38.576672 | orchestrator |
2025-09-17 15:42:38.576676 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-17 15:42:38.576682 | orchestrator | Wednesday 17 September 2025 15:42:01 +0000 (0:00:00.102) 0:00:02.609 ***
2025-09-17 15:42:38.576705 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:42:38.576713 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:42:38.576720 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:42:38.576727 | orchestrator |
2025-09-17 15:42:38.576737 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-17 15:42:38.576747 | orchestrator | Wednesday 17 September 2025 15:42:02 +0000 (0:00:00.182) 0:00:02.791 ***
2025-09-17 15:42:38.576754 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:42:38.576760 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:42:38.576766 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:42:38.576772 | orchestrator |
2025-09-17 15:42:38.576779 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-17 15:42:38.576786 | orchestrator | Wednesday 17 September 2025 15:42:02 +0000 (0:00:00.170) 0:00:02.961 ***
2025-09-17 15:42:38.576794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:42:38.576801 | orchestrator |
2025-09-17 15:42:38.576807 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-17 15:42:38.576814 | orchestrator | Wednesday 17 September 2025 15:42:02 +0000 (0:00:00.116) 0:00:03.078 ***
2025-09-17 15:42:38.576821 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:42:38.576827 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:42:38.576833 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:42:38.576839 | orchestrator |
2025-09-17 15:42:38.576846 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-17 15:42:38.576852 | orchestrator | Wednesday 17 September 2025 15:42:02 +0000 (0:00:00.383) 0:00:03.461 ***
2025-09-17 15:42:38.576858 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:42:38.576864 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:42:38.576870 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:42:38.576878 | orchestrator |
2025-09-17 15:42:38.576885 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-17 15:42:38.576893 | orchestrator | Wednesday 17 September 2025 15:42:02 +0000 (0:00:00.113) 0:00:03.575 ***
2025-09-17 15:42:38.576901 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:42:38.576908 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:42:38.576914 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:42:38.576919 | orchestrator |
2025-09-17 15:42:38.576925 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-17 15:42:38.576932 | orchestrator | Wednesday 17 September 2025 15:42:03 +0000 (0:00:00.967) 0:00:04.542 ***
2025-09-17 15:42:38.576939 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:42:38.576946 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:42:38.576952 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:42:38.576959 | orchestrator |
2025-09-17 15:42:38.576966 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-17 15:42:38.576972 | orchestrator | Wednesday 17 September 2025 15:42:04 +0000 (0:00:00.472) 0:00:05.015 ***
2025-09-17 15:42:38.576978 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:42:38.576985 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:42:38.576991 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:42:38.576999 | orchestrator |
2025-09-17 15:42:38.577005 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-17 15:42:38.577012 | orchestrator | Wednesday 17 September 2025 15:42:05 +0000 (0:00:01.022) 0:00:06.037 ***
2025-09-17 15:42:38.577019 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:42:38.577025 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:42:38.577033 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:42:38.577038 | orchestrator |
2025-09-17 15:42:38.577042 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-17 15:42:38.577046 | orchestrator | Wednesday 17 September 2025 15:42:22 +0000 (0:00:16.854) 0:00:22.892 ***
2025-09-17 15:42:38.577051 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:42:38.577062 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:42:38.577066 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:42:38.577070 | orchestrator |
2025-09-17 15:42:38.577075 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-17 15:42:38.577091 | orchestrator | Wednesday 17 September 2025 15:42:22 +0000 (0:00:00.087) 0:00:22.979 ***
2025-09-17 15:42:38.577095 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:42:38.577099 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:42:38.577104 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:42:38.577108 | orchestrator |
2025-09-17 15:42:38.577114 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-17 15:42:38.577121 | orchestrator | Wednesday 17 September 2025 15:42:30 +0000 (0:00:07.772) 0:00:30.752 ***
2025-09-17 15:42:38.577129 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:42:38.577136 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:42:38.577142 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:42:38.577149 | orchestrator |
2025-09-17 15:42:38.577158 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-17 15:42:38.577165 | orchestrator | Wednesday 17 September 2025 15:42:30 +0000 (0:00:00.376) 0:00:31.128 ***
2025-09-17 15:42:38.577171 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-17 15:42:38.577178 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-17 15:42:38.577190 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-17 15:42:38.577197 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-17 15:42:38.577205 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-17 15:42:38.577212 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-17 15:42:38.577220 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-17 15:42:38.577227 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-17 15:42:38.577235 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-17 15:42:38.577242 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-17 15:42:38.577250 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-17 15:42:38.577258 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-17 15:42:38.577265 | orchestrator |
2025-09-17 15:42:38.577272 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-17 15:42:38.577280 | orchestrator | Wednesday 17 September 2025 15:42:33 +0000 (0:00:03.202) 0:00:34.331 ***
2025-09-17 15:42:38.577287 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:42:38.577295 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:42:38.577306 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:42:38.577316 | orchestrator |
2025-09-17 15:42:38.577326 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-17 15:42:38.577336 | orchestrator |
2025-09-17 15:42:38.577347 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-17 15:42:38.577358 | orchestrator | Wednesday 17 September 2025 15:42:34 +0000 (0:00:01.022) 0:00:35.354 ***
2025-09-17 15:42:38.577367 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:42:38.577377 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:42:38.577387 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:42:38.577397 | orchestrator | ok: [testbed-manager]
2025-09-17 15:42:38.577406 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:42:38.577416 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:42:38.577426 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:42:38.577437 | orchestrator |
2025-09-17 15:42:38.577447 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 15:42:38.577458 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 15:42:38.577469 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 15:42:38.577489 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 15:42:38.577499 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 15:42:38.577528 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 15:42:38.577540 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 15:42:38.577550 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 15:42:38.577561 | orchestrator |
2025-09-17 15:42:38.577571 | orchestrator |
2025-09-17 15:42:38.577581 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 15:42:38.577592 | orchestrator | Wednesday 17 September 2025 15:42:38 +0000 (0:00:03.881) 0:00:39.235 ***
2025-09-17 15:42:38.577602 | orchestrator | ===============================================================================
2025-09-17 15:42:38.577613 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.85s
2025-09-17 15:42:38.577622 | orchestrator | Install required packages (Debian) -------------------------------------- 7.77s
2025-09-17 15:42:38.577631 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.88s
2025-09-17 15:42:38.577642 | orchestrator | Copy fact files --------------------------------------------------------- 3.20s
2025-09-17 15:42:38.577651 | orchestrator | Create custom facts directory ------------------------------------------- 1.34s
2025-09-17 15:42:38.577661 | orchestrator | Copy fact file ---------------------------------------------------------- 1.08s
2025-09-17 15:42:38.577678 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.02s
2025-09-17 15:42:38.782691 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s
2025-09-17 15:42:38.782796 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.97s
2025-09-17 15:42:38.782811 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-09-17 15:42:38.782823 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.38s
2025-09-17 15:42:38.782833 | orchestrator | Create custom facts directory ------------------------------------------- 0.38s
2025-09-17 15:42:38.782844 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2025-09-17 15:42:38.782855 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s
2025-09-17 15:42:38.782865 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2025-09-17 15:42:38.782877 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-09-17 15:42:38.782888 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-09-17 15:42:38.782899 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2025-09-17 15:42:39.050623 | orchestrator | + osism apply bootstrap
2025-09-17 15:42:51.026100 | orchestrator | 2025-09-17 15:42:51 | INFO  | Task 523b7759-0bac-451f-9143-3e426835d528 (bootstrap) was prepared for execution.
2025-09-17 15:42:51.026188 | orchestrator | 2025-09-17 15:42:51 | INFO  | It takes a moment until task 523b7759-0bac-451f-9143-3e426835d528 (bootstrap) has been started and output is visible here.
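[Editor's note] On these Ubuntu 24.04 nodes the repository role skipped the "Ubuntu < 24.04" path, removed `/etc/apt/sources.list`, and installed an `ubuntu.sources` file instead; 24.04 switched the default apt configuration to the deb822 format. The exact file written by `osism.commons.repository` is not shown in the log, so the fragment below is only an illustrative deb822 sketch (URIs, suites, and keyring path are assumptions, not the role's actual template):

```
# /etc/apt/sources.list.d/ubuntu.sources (deb822 style, Ubuntu >= 24.04)
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```

A change to this file is why the role then notifies the "Force update of package cache" handler seen above.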
2025-09-17 15:43:07.104432 | orchestrator | 2025-09-17 15:43:07.104635 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-17 15:43:07.104666 | orchestrator | 2025-09-17 15:43:07.104687 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-17 15:43:07.104743 | orchestrator | Wednesday 17 September 2025 15:42:55 +0000 (0:00:00.185) 0:00:00.185 *** 2025-09-17 15:43:07.104764 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:07.104783 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:43:07.104802 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:07.104819 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:43:07.104838 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:07.104857 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:07.104877 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:07.104895 | orchestrator | 2025-09-17 15:43:07.104930 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-17 15:43:07.104944 | orchestrator | 2025-09-17 15:43:07.104957 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-17 15:43:07.104970 | orchestrator | Wednesday 17 September 2025 15:42:55 +0000 (0:00:00.278) 0:00:00.463 *** 2025-09-17 15:43:07.104982 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:43:07.104995 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:43:07.105008 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:07.105020 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:07.105033 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:07.105045 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:07.105057 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:07.105069 | orchestrator | 2025-09-17 15:43:07.105082 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-09-17 15:43:07.105094 | orchestrator | 2025-09-17 15:43:07.105107 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-17 15:43:07.105119 | orchestrator | Wednesday 17 September 2025 15:42:59 +0000 (0:00:03.492) 0:00:03.956 *** 2025-09-17 15:43:07.105133 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-17 15:43:07.105146 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-17 15:43:07.105157 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-17 15:43:07.105170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-17 15:43:07.105182 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-17 15:43:07.105195 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-17 15:43:07.105207 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-17 15:43:07.105219 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-17 15:43:07.105232 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-17 15:43:07.105244 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-17 15:43:07.105257 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-17 15:43:07.105270 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-17 15:43:07.105282 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-17 15:43:07.105295 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-17 15:43:07.105332 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-17 15:43:07.105368 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-17 15:43:07.105391 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-17 15:43:07.105414 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-manager)  2025-09-17 15:43:07.105426 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-09-17 15:43:07.105436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-17 15:43:07.105447 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-17 15:43:07.105457 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:43:07.105468 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-09-17 15:43:07.105479 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-09-17 15:43:07.105489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-17 15:43:07.105578 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-17 15:43:07.105592 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-17 15:43:07.105603 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-09-17 15:43:07.105614 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:43:07.105626 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-17 15:43:07.105636 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-09-17 15:43:07.105646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-17 15:43:07.105656 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-09-17 15:43:07.105666 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-17 15:43:07.105676 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:43:07.105685 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-17 15:43:07.105700 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-09-17 15:43:07.105717 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-17 15:43:07.105740 | orchestrator | skipping: [testbed-node-4] => 
(item=testbed-node-2)  2025-09-17 15:43:07.105759 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-09-17 15:43:07.105775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-17 15:43:07.105792 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:43:07.105807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-17 15:43:07.105824 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-09-17 15:43:07.105840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 15:43:07.105857 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-09-17 15:43:07.105897 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-17 15:43:07.105914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-17 15:43:07.105932 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-09-17 15:43:07.105948 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-09-17 15:43:07.105965 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:43:07.105981 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-09-17 15:43:07.105995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 15:43:07.106005 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:43:07.106081 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-09-17 15:43:07.106096 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:43:07.106109 | orchestrator | 2025-09-17 15:43:07.106124 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-09-17 15:43:07.106141 | orchestrator | 2025-09-17 15:43:07.106157 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-09-17 15:43:07.106173 | orchestrator | Wednesday 17 September 2025 15:42:59 +0000 
(0:00:00.435) 0:00:04.391 *** 2025-09-17 15:43:07.106190 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:43:07.106205 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:07.106221 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:07.106238 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:07.106253 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:43:07.106263 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:07.106273 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:07.106282 | orchestrator | 2025-09-17 15:43:07.106295 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-17 15:43:07.106310 | orchestrator | Wednesday 17 September 2025 15:43:01 +0000 (0:00:01.231) 0:00:05.622 *** 2025-09-17 15:43:07.106327 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:07.106343 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:43:07.106359 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:07.106375 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:07.106392 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:43:07.106411 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:07.106421 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:07.106430 | orchestrator | 2025-09-17 15:43:07.106441 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-17 15:43:07.106457 | orchestrator | Wednesday 17 September 2025 15:43:02 +0000 (0:00:01.208) 0:00:06.831 *** 2025-09-17 15:43:07.106475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:43:07.106494 | orchestrator | 2025-09-17 15:43:07.106538 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-17 15:43:07.106556 | orchestrator | 
Wednesday 17 September 2025 15:43:02 +0000 (0:00:00.287) 0:00:07.119 *** 2025-09-17 15:43:07.106573 | orchestrator | changed: [testbed-manager] 2025-09-17 15:43:07.106589 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:43:07.106605 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:43:07.106621 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:43:07.106637 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:43:07.106653 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:43:07.106663 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:43:07.106673 | orchestrator | 2025-09-17 15:43:07.106682 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-17 15:43:07.106692 | orchestrator | Wednesday 17 September 2025 15:43:04 +0000 (0:00:02.087) 0:00:09.206 *** 2025-09-17 15:43:07.106701 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:43:07.106712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:43:07.106724 | orchestrator | 2025-09-17 15:43:07.106734 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-17 15:43:07.106743 | orchestrator | Wednesday 17 September 2025 15:43:04 +0000 (0:00:00.235) 0:00:09.442 *** 2025-09-17 15:43:07.106753 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:43:07.106763 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:43:07.106772 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:43:07.106782 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:43:07.106791 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:43:07.106800 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:43:07.106810 | orchestrator | 2025-09-17 15:43:07.106819 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2025-09-17 15:43:07.106829 | orchestrator | Wednesday 17 September 2025 15:43:05 +0000 (0:00:00.942) 0:00:10.385 *** 2025-09-17 15:43:07.106839 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:43:07.106848 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:43:07.106858 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:43:07.106867 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:43:07.106876 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:43:07.106886 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:43:07.106895 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:43:07.106905 | orchestrator | 2025-09-17 15:43:07.106915 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-17 15:43:07.106924 | orchestrator | Wednesday 17 September 2025 15:43:06 +0000 (0:00:00.617) 0:00:11.003 *** 2025-09-17 15:43:07.106934 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:43:07.106943 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:43:07.106953 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:43:07.106962 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:43:07.106972 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:43:07.106981 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:43:07.106991 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:07.107000 | orchestrator | 2025-09-17 15:43:07.107010 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-17 15:43:07.107029 | orchestrator | Wednesday 17 September 2025 15:43:06 +0000 (0:00:00.411) 0:00:11.414 *** 2025-09-17 15:43:07.107039 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:43:07.107048 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:43:07.107067 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:43:19.311371 | orchestrator | skipping: 
[testbed-node-2] 2025-09-17 15:43:19.311497 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:43:19.311546 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:43:19.311557 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:43:19.311566 | orchestrator | 2025-09-17 15:43:19.311576 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-17 15:43:19.311586 | orchestrator | Wednesday 17 September 2025 15:43:07 +0000 (0:00:00.236) 0:00:11.651 *** 2025-09-17 15:43:19.311597 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:43:19.311622 | orchestrator | 2025-09-17 15:43:19.311631 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-17 15:43:19.311641 | orchestrator | Wednesday 17 September 2025 15:43:07 +0000 (0:00:00.284) 0:00:11.935 *** 2025-09-17 15:43:19.311650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:43:19.311661 | orchestrator | 2025-09-17 15:43:19.311676 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-17 15:43:19.311690 | orchestrator | Wednesday 17 September 2025 15:43:07 +0000 (0:00:00.324) 0:00:12.259 *** 2025-09-17 15:43:19.311704 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:43:19.311721 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:19.311735 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:19.311749 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:19.311764 | orchestrator | ok: [testbed-node-0] 2025-09-17 
15:43:19.311779 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:19.311794 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:19.311808 | orchestrator | 2025-09-17 15:43:19.311822 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-17 15:43:19.311838 | orchestrator | Wednesday 17 September 2025 15:43:08 +0000 (0:00:01.101) 0:00:13.360 *** 2025-09-17 15:43:19.311853 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:43:19.311867 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:43:19.311875 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:43:19.311884 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:43:19.311893 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:43:19.311902 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:43:19.311912 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:43:19.311922 | orchestrator | 2025-09-17 15:43:19.311932 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-17 15:43:19.311942 | orchestrator | Wednesday 17 September 2025 15:43:09 +0000 (0:00:00.230) 0:00:13.591 *** 2025-09-17 15:43:19.311952 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:19.311961 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:43:19.311971 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:19.311979 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:43:19.311987 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:19.311996 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:19.312004 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:19.312013 | orchestrator | 2025-09-17 15:43:19.312021 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-17 15:43:19.312030 | orchestrator | Wednesday 17 September 2025 15:43:09 +0000 (0:00:00.577) 0:00:14.168 *** 2025-09-17 15:43:19.312038 | orchestrator | skipping: 
[testbed-manager] 2025-09-17 15:43:19.312070 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:43:19.312079 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:43:19.312087 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:43:19.312096 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:43:19.312104 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:43:19.312112 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:43:19.312120 | orchestrator | 2025-09-17 15:43:19.312129 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-17 15:43:19.312139 | orchestrator | Wednesday 17 September 2025 15:43:09 +0000 (0:00:00.261) 0:00:14.430 *** 2025-09-17 15:43:19.312147 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:19.312156 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:43:19.312203 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:43:19.312212 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:43:19.312221 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:43:19.312229 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:43:19.312238 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:43:19.312246 | orchestrator | 2025-09-17 15:43:19.312255 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-17 15:43:19.312263 | orchestrator | Wednesday 17 September 2025 15:43:10 +0000 (0:00:00.607) 0:00:15.037 *** 2025-09-17 15:43:19.312272 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:19.312280 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:43:19.312288 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:43:19.312297 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:43:19.312306 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:43:19.312314 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:43:19.312322 | orchestrator | changed: 
[testbed-node-5] 2025-09-17 15:43:19.312330 | orchestrator | 2025-09-17 15:43:19.312339 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-17 15:43:19.312352 | orchestrator | Wednesday 17 September 2025 15:43:11 +0000 (0:00:01.103) 0:00:16.140 *** 2025-09-17 15:43:19.312360 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:19.312369 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:43:19.312377 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:19.312387 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:19.312401 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:19.312415 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:19.312430 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:43:19.312444 | orchestrator | 2025-09-17 15:43:19.312459 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-17 15:43:19.312474 | orchestrator | Wednesday 17 September 2025 15:43:12 +0000 (0:00:01.269) 0:00:17.409 *** 2025-09-17 15:43:19.312547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:43:19.312564 | orchestrator | 2025-09-17 15:43:19.312572 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-17 15:43:19.312581 | orchestrator | Wednesday 17 September 2025 15:43:13 +0000 (0:00:00.414) 0:00:17.824 *** 2025-09-17 15:43:19.312590 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:43:19.312598 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:43:19.312606 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:43:19.312615 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:43:19.312623 | orchestrator | changed: [testbed-node-4] 2025-09-17 
15:43:19.312632 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:43:19.312640 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:43:19.312649 | orchestrator | 2025-09-17 15:43:19.312657 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-17 15:43:19.312666 | orchestrator | Wednesday 17 September 2025 15:43:14 +0000 (0:00:01.267) 0:00:19.091 *** 2025-09-17 15:43:19.312674 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:19.312693 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:43:19.312702 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:19.312710 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:43:19.312718 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:19.312727 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:19.312735 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:19.312744 | orchestrator | 2025-09-17 15:43:19.312752 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-17 15:43:19.312761 | orchestrator | Wednesday 17 September 2025 15:43:14 +0000 (0:00:00.254) 0:00:19.345 *** 2025-09-17 15:43:19.312769 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:19.312778 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:43:19.312786 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:19.312795 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:43:19.312803 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:19.312811 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:19.312819 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:19.312828 | orchestrator | 2025-09-17 15:43:19.312836 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-17 15:43:19.312845 | orchestrator | Wednesday 17 September 2025 15:43:15 +0000 (0:00:00.256) 0:00:19.601 *** 2025-09-17 15:43:19.312853 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:19.312862 | 
orchestrator | ok: [testbed-node-0] 2025-09-17 15:43:19.312870 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:19.312878 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:43:19.312887 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:19.312895 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:19.312903 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:19.312911 | orchestrator | 2025-09-17 15:43:19.312920 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-17 15:43:19.312929 | orchestrator | Wednesday 17 September 2025 15:43:15 +0000 (0:00:00.245) 0:00:19.847 *** 2025-09-17 15:43:19.312938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:43:19.312949 | orchestrator | 2025-09-17 15:43:19.312957 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-17 15:43:19.312966 | orchestrator | Wednesday 17 September 2025 15:43:15 +0000 (0:00:00.283) 0:00:20.131 *** 2025-09-17 15:43:19.312974 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:19.312982 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:43:19.312991 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:19.312999 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:43:19.313007 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:19.313015 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:19.313024 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:19.313032 | orchestrator | 2025-09-17 15:43:19.313040 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-17 15:43:19.313049 | orchestrator | Wednesday 17 September 2025 15:43:16 +0000 (0:00:00.551) 0:00:20.682 *** 2025-09-17 15:43:19.313057 | orchestrator | 
skipping: [testbed-manager] 2025-09-17 15:43:19.313066 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:43:19.313074 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:43:19.313082 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:43:19.313091 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:43:19.313099 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:43:19.313107 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:43:19.313116 | orchestrator | 2025-09-17 15:43:19.313124 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-17 15:43:19.313133 | orchestrator | Wednesday 17 September 2025 15:43:16 +0000 (0:00:00.256) 0:00:20.939 *** 2025-09-17 15:43:19.313141 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:19.313150 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:43:19.313158 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:43:19.313173 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:43:19.313181 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:19.313190 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:19.313198 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:19.313207 | orchestrator | 2025-09-17 15:43:19.313215 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-17 15:43:19.313223 | orchestrator | Wednesday 17 September 2025 15:43:17 +0000 (0:00:01.182) 0:00:22.122 *** 2025-09-17 15:43:19.313238 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:19.313246 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:19.313255 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:43:19.313263 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:43:19.313271 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:19.313280 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:19.313288 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:19.313296 | orchestrator | 
2025-09-17 15:43:19.313305 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-17 15:43:19.313314 | orchestrator | Wednesday 17 September 2025 15:43:18 +0000 (0:00:00.569) 0:00:22.691 *** 2025-09-17 15:43:19.313322 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:19.313331 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:43:19.313339 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:19.313347 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:43:19.313362 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:59.730114 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:43:59.730224 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:59.730239 | orchestrator | 2025-09-17 15:43:59.730251 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-17 15:43:59.730263 | orchestrator | Wednesday 17 September 2025 15:43:19 +0000 (0:00:01.068) 0:00:23.760 *** 2025-09-17 15:43:59.730273 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:43:59.730284 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:43:59.730294 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:43:59.730304 | orchestrator | changed: [testbed-manager] 2025-09-17 15:43:59.730313 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:43:59.730323 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:43:59.730333 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:43:59.730342 | orchestrator | 2025-09-17 15:43:59.730352 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-17 15:43:59.730362 | orchestrator | Wednesday 17 September 2025 15:43:37 +0000 (0:00:18.033) 0:00:41.794 *** 2025-09-17 15:43:59.730372 | orchestrator | ok: [testbed-manager] 2025-09-17 15:43:59.730381 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:43:59.730391 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:43:59.730401 | orchestrator 
| ok: [testbed-node-2]
2025-09-17 15:43:59.730410 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:43:59.730420 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:43:59.730445 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:43:59.730455 | orchestrator |
2025-09-17 15:43:59.730465 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-09-17 15:43:59.730475 | orchestrator | Wednesday 17 September 2025 15:43:37 +0000 (0:00:00.203) 0:00:41.997 ***
2025-09-17 15:43:59.730484 | orchestrator | ok: [testbed-manager]
2025-09-17 15:43:59.730494 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:43:59.730504 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:43:59.730562 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:43:59.730573 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:43:59.730583 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:43:59.730594 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:43:59.730604 | orchestrator |
2025-09-17 15:43:59.730616 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-09-17 15:43:59.730627 | orchestrator | Wednesday 17 September 2025 15:43:37 +0000 (0:00:00.194) 0:00:42.192 ***
2025-09-17 15:43:59.730638 | orchestrator | ok: [testbed-manager]
2025-09-17 15:43:59.730649 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:43:59.730660 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:43:59.730695 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:43:59.730707 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:43:59.730718 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:43:59.730729 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:43:59.730740 | orchestrator |
2025-09-17 15:43:59.730751 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-09-17 15:43:59.730762 | orchestrator | Wednesday 17 September 2025 15:43:37 +0000 (0:00:00.209) 0:00:42.402 ***
2025-09-17 15:43:59.730773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:43:59.730787 | orchestrator |
2025-09-17 15:43:59.730796 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-09-17 15:43:59.730806 | orchestrator | Wednesday 17 September 2025 15:43:38 +0000 (0:00:00.282) 0:00:42.684 ***
2025-09-17 15:43:59.730816 | orchestrator | ok: [testbed-manager]
2025-09-17 15:43:59.730825 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:43:59.730835 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:43:59.730845 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:43:59.730854 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:43:59.730864 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:43:59.730873 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:43:59.730883 | orchestrator |
2025-09-17 15:43:59.730893 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-09-17 15:43:59.730903 | orchestrator | Wednesday 17 September 2025 15:43:39 +0000 (0:00:01.541) 0:00:44.226 ***
2025-09-17 15:43:59.730912 | orchestrator | changed: [testbed-manager]
2025-09-17 15:43:59.730922 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:43:59.730931 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:43:59.730941 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:43:59.730950 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:43:59.730960 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:43:59.730969 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:43:59.730979 | orchestrator |
2025-09-17 15:43:59.730989 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-09-17 15:43:59.730998 | orchestrator | Wednesday 17 September 2025 15:43:40 +0000 (0:00:01.123) 0:00:45.349 ***
2025-09-17 15:43:59.731008 | orchestrator | ok: [testbed-manager]
2025-09-17 15:43:59.731017 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:43:59.731027 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:43:59.731036 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:43:59.731046 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:43:59.731055 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:43:59.731065 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:43:59.731074 | orchestrator |
2025-09-17 15:43:59.731084 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-09-17 15:43:59.731094 | orchestrator | Wednesday 17 September 2025 15:43:41 +0000 (0:00:00.825) 0:00:46.175 ***
2025-09-17 15:43:59.731105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:43:59.731117 | orchestrator |
2025-09-17 15:43:59.731127 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-09-17 15:43:59.731137 | orchestrator | Wednesday 17 September 2025 15:43:41 +0000 (0:00:00.283) 0:00:46.458 ***
2025-09-17 15:43:59.731147 | orchestrator | changed: [testbed-manager]
2025-09-17 15:43:59.731156 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:43:59.731166 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:43:59.731175 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:43:59.731185 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:43:59.731194 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:43:59.731204 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:43:59.731213 | orchestrator |
2025-09-17 15:43:59.731248 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-09-17 15:43:59.731259 | orchestrator | Wednesday 17 September 2025 15:43:42 +0000 (0:00:01.004) 0:00:47.463 ***
2025-09-17 15:43:59.731268 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:43:59.731278 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:43:59.731287 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:43:59.731297 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:43:59.731306 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:43:59.731316 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:43:59.731325 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:43:59.731334 | orchestrator |
2025-09-17 15:43:59.731344 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-09-17 15:43:59.731354 | orchestrator | Wednesday 17 September 2025 15:43:43 +0000 (0:00:00.286) 0:00:47.750 ***
2025-09-17 15:43:59.731363 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:43:59.731373 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:43:59.731382 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:43:59.731392 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:43:59.731401 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:43:59.731410 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:43:59.731420 | orchestrator | changed: [testbed-manager]
2025-09-17 15:43:59.731429 | orchestrator |
2025-09-17 15:43:59.731439 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-09-17 15:43:59.731448 | orchestrator | Wednesday 17 September 2025 15:43:54 +0000 (0:00:11.475) 0:00:59.225 ***
2025-09-17 15:43:59.731458 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:43:59.731467 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:43:59.731477 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:43:59.731486 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:43:59.731496 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:43:59.731505 | orchestrator | ok: [testbed-manager]
2025-09-17 15:43:59.731534 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:43:59.731544 | orchestrator |
2025-09-17 15:43:59.731556 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-09-17 15:43:59.731571 | orchestrator | Wednesday 17 September 2025 15:43:55 +0000 (0:00:00.888) 0:01:00.113 ***
2025-09-17 15:43:59.731587 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:43:59.731598 | orchestrator | ok: [testbed-manager]
2025-09-17 15:43:59.731607 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:43:59.731617 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:43:59.731626 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:43:59.731636 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:43:59.731645 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:43:59.731654 | orchestrator |
2025-09-17 15:43:59.731664 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-09-17 15:43:59.731674 | orchestrator | Wednesday 17 September 2025 15:43:56 +0000 (0:00:00.905) 0:01:01.019 ***
2025-09-17 15:43:59.731683 | orchestrator | ok: [testbed-manager]
2025-09-17 15:43:59.731693 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:43:59.731702 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:43:59.731712 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:43:59.731721 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:43:59.731730 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:43:59.731740 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:43:59.731749 | orchestrator |
2025-09-17 15:43:59.731759 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-09-17 15:43:59.731769 | orchestrator | Wednesday 17 September 2025 15:43:56 +0000 (0:00:00.237) 0:01:01.257 ***
2025-09-17 15:43:59.731779 | orchestrator | ok: [testbed-manager]
2025-09-17 15:43:59.731788 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:43:59.731798 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:43:59.731807 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:43:59.731816 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:43:59.731825 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:43:59.731835 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:43:59.731852 | orchestrator |
2025-09-17 15:43:59.731879 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-09-17 15:43:59.731889 | orchestrator | Wednesday 17 September 2025 15:43:56 +0000 (0:00:00.208) 0:01:01.466 ***
2025-09-17 15:43:59.731899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:43:59.731909 | orchestrator |
2025-09-17 15:43:59.731918 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-09-17 15:43:59.731927 | orchestrator | Wednesday 17 September 2025 15:43:57 +0000 (0:00:00.291) 0:01:01.757 ***
2025-09-17 15:43:59.731937 | orchestrator | ok: [testbed-manager]
2025-09-17 15:43:59.731946 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:43:59.731956 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:43:59.731965 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:43:59.731975 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:43:59.731984 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:43:59.731993 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:43:59.732002 | orchestrator |
2025-09-17 15:43:59.732012 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-09-17 15:43:59.732021 | orchestrator | Wednesday 17 September 2025 15:43:58 +0000 (0:00:01.606) 0:01:03.363 ***
2025-09-17 15:43:59.732031 | orchestrator | changed: [testbed-manager]
2025-09-17 15:43:59.732040 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:43:59.732050 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:43:59.732059 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:43:59.732073 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:43:59.732082 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:43:59.732092 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:43:59.732101 | orchestrator |
2025-09-17 15:43:59.732111 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-09-17 15:43:59.732120 | orchestrator | Wednesday 17 September 2025 15:43:59 +0000 (0:00:00.597) 0:01:03.961 ***
2025-09-17 15:43:59.732130 | orchestrator | ok: [testbed-manager]
2025-09-17 15:43:59.732139 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:43:59.732149 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:43:59.732158 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:43:59.732167 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:43:59.732177 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:43:59.732186 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:43:59.732195 | orchestrator |
2025-09-17 15:43:59.732213 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-09-17 15:46:19.538916 | orchestrator | Wednesday 17 September 2025 15:43:59 +0000 (0:00:00.225) 0:01:04.187 ***
2025-09-17 15:46:19.539024 | orchestrator | ok: [testbed-manager]
2025-09-17 15:46:19.539041 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:46:19.539053 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:46:19.539064 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:46:19.539075 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:46:19.539086 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:46:19.539097 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:46:19.539108 | orchestrator |
2025-09-17 15:46:19.539120 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-09-17 15:46:19.539132 | orchestrator | Wednesday 17 September 2025 15:44:00 +0000 (0:00:01.142) 0:01:05.329 ***
2025-09-17 15:46:19.539143 | orchestrator | changed: [testbed-manager]
2025-09-17 15:46:19.539155 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:46:19.539166 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:46:19.539176 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:46:19.539187 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:46:19.539198 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:46:19.539208 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:46:19.539219 | orchestrator |
2025-09-17 15:46:19.539230 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-09-17 15:46:19.539264 | orchestrator | Wednesday 17 September 2025 15:44:02 +0000 (0:00:01.886) 0:01:07.216 ***
2025-09-17 15:46:19.539276 | orchestrator | ok: [testbed-manager]
2025-09-17 15:46:19.539286 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:46:19.539297 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:46:19.539308 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:46:19.539319 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:46:19.539329 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:46:19.539340 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:46:19.539351 | orchestrator |
2025-09-17 15:46:19.539362 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-09-17 15:46:19.539372 | orchestrator | Wednesday 17 September 2025 15:44:05 +0000 (0:00:02.611) 0:01:09.827 ***
2025-09-17 15:46:19.539383 | orchestrator | ok: [testbed-manager]
2025-09-17 15:46:19.539394 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:46:19.539405 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:46:19.539415 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:46:19.539425 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:46:19.539436 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:46:19.539446 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:46:19.539457 | orchestrator |
2025-09-17 15:46:19.539470 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-09-17 15:46:19.539483 | orchestrator | Wednesday 17 September 2025 15:44:42 +0000 (0:00:37.058) 0:01:46.886 ***
2025-09-17 15:46:19.539496 | orchestrator | changed: [testbed-manager]
2025-09-17 15:46:19.539508 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:46:19.539520 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:46:19.539557 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:46:19.539569 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:46:19.539582 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:46:19.539594 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:46:19.539606 | orchestrator |
2025-09-17 15:46:19.539619 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-09-17 15:46:19.539632 | orchestrator | Wednesday 17 September 2025 15:45:59 +0000 (0:01:16.788) 0:03:03.674 ***
2025-09-17 15:46:19.539644 | orchestrator | ok: [testbed-manager]
2025-09-17 15:46:19.539657 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:46:19.539669 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:46:19.539682 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:46:19.539694 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:46:19.539707 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:46:19.539719 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:46:19.539732 | orchestrator |
2025-09-17 15:46:19.539744 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-09-17 15:46:19.539757 | orchestrator | Wednesday 17 September 2025 15:46:00 +0000 (0:00:01.767) 0:03:05.441 ***
2025-09-17 15:46:19.539769 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:46:19.539782 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:46:19.539794 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:46:19.539806 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:46:19.539816 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:46:19.539827 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:46:19.539838 | orchestrator | changed: [testbed-manager]
2025-09-17 15:46:19.539848 | orchestrator |
2025-09-17 15:46:19.539859 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-09-17 15:46:19.539870 | orchestrator | Wednesday 17 September 2025 15:46:12 +0000 (0:00:11.873) 0:03:17.315 ***
2025-09-17 15:46:19.539892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-09-17 15:46:19.539923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-09-17 15:46:19.539968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-09-17 15:46:19.539983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-09-17 15:46:19.539994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-09-17 15:46:19.540005 | orchestrator |
2025-09-17 15:46:19.540016 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-09-17 15:46:19.540027 | orchestrator | Wednesday 17 September 2025 15:46:13 +0000 (0:00:00.359) 0:03:17.674 ***
2025-09-17 15:46:19.540038 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-17 15:46:19.540049 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:46:19.540060 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-17 15:46:19.540070 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:46:19.540081 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-17 15:46:19.540092 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:46:19.540102 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-17 15:46:19.540113 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:46:19.540124 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-17 15:46:19.540135 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-17 15:46:19.540146 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-17 15:46:19.540156 | orchestrator |
2025-09-17 15:46:19.540167 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-09-17 15:46:19.540178 | orchestrator | Wednesday 17 September 2025 15:46:14 +0000 (0:00:01.615) 0:03:19.290 ***
2025-09-17 15:46:19.540189 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-17 15:46:19.540201 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-17 15:46:19.540212 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-17 15:46:19.540223 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-17 15:46:19.540234 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-17 15:46:19.540244 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-17 15:46:19.540261 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-17 15:46:19.540272 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-17 15:46:19.540282 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-17 15:46:19.540293 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-17 15:46:19.540304 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:46:19.540314 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-17 15:46:19.540325 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-17 15:46:19.540336 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-17 15:46:19.540346 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-17 15:46:19.540361 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-17 15:46:19.540372 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-17 15:46:19.540383 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-17 15:46:19.540394 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-17 15:46:19.540404 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-17 15:46:19.540415 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-17 15:46:19.540432 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-17 15:46:22.515738 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-17 15:46:22.515826 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-17 15:46:22.515841 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-17 15:46:22.515853 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-17 15:46:22.515864 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-17 15:46:22.515875 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-17 15:46:22.515886 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:46:22.515898 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-17 15:46:22.515909 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-17 15:46:22.515920 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-17 15:46:22.515931 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-17 15:46:22.515941 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-17 15:46:22.515952 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-17 15:46:22.515962 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:46:22.515973 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-17 15:46:22.515984 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-17 15:46:22.515994 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-17 15:46:22.516027 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-17 15:46:22.516039 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-17 15:46:22.516049 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-17 15:46:22.516060 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-17 15:46:22.516071 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:46:22.516081 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-17 15:46:22.516092 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-17 15:46:22.516103 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-17 15:46:22.516113 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-17 15:46:22.516124 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-17 15:46:22.516135 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-17 15:46:22.516145 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-17 15:46:22.516156 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-17 15:46:22.516166 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-17 15:46:22.516177 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-17 15:46:22.516187 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-17 15:46:22.516198 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-17 15:46:22.516208 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-17 15:46:22.516219 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-17 15:46:22.516229 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-17 15:46:22.516255 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-17 15:46:22.516267 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-17 15:46:22.516277 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-17 15:46:22.516288 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-17 15:46:22.516301 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-17 15:46:22.516313 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-17 15:46:22.516341 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-17 15:46:22.516354 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-17 15:46:22.516366 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-17 15:46:22.516378 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-17 15:46:22.516390 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-17 15:46:22.516401 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-17 15:46:22.516413 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-17 15:46:22.516431 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-17 15:46:22.516443 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-17 15:46:22.516455 | orchestrator |
2025-09-17 15:46:22.516468 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-09-17 15:46:22.516480 | orchestrator | Wednesday 17 September 2025 15:46:19 +0000 (0:00:04.699) 0:03:23.989 ***
2025-09-17 15:46:22.516492 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-17 15:46:22.516504 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-17 15:46:22.516515 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-17 15:46:22.516548 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-17 15:46:22.516561 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-17 15:46:22.516573 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-17 15:46:22.516589 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-17 15:46:22.516601 | orchestrator |
2025-09-17 15:46:22.516612 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-09-17 15:46:22.516625 | orchestrator | Wednesday 17 September 2025 15:46:21 +0000 (0:00:01.559) 0:03:25.548 ***
2025-09-17 15:46:22.516637 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-17 15:46:22.516649 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:46:22.516660 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-17 15:46:22.516670 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:46:22.516681 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-17 15:46:22.516692 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:46:22.516702 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-17 15:46:22.516713 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:46:22.516724 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-17 15:46:22.516735 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-17 15:46:22.516745 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-17 15:46:22.516756 | orchestrator |
2025-09-17 15:46:22.516767 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-09-17 15:46:22.516778 | orchestrator | Wednesday 17 September 2025 15:46:21 +0000 (0:00:00.566) 0:03:26.114 ***
2025-09-17 15:46:22.516788 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-17 15:46:22.516799 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:46:22.516809 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-17 15:46:22.516820 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:46:22.516830 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-17 15:46:22.516841 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-17 15:46:22.516852 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:46:22.516862 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:46:22.516878 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-17 15:46:22.516889 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-17 15:46:22.516906 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-17 15:46:22.516917 | orchestrator |
2025-09-17 15:46:22.516927 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-09-17 15:46:22.516938 | orchestrator | Wednesday 17 September 2025 15:46:22 +0000 (0:00:00.227) 0:03:26.748 ***
2025-09-17 15:46:22.516949 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:46:22.516959 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:46:22.516970 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:46:22.516981 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:46:22.516991 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:46:22.517009 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:46:34.015652 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:46:34.015766 | orchestrator |
2025-09-17 15:46:34.015785 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-09-17 15:46:34.015798 | orchestrator | Wednesday 17 September 2025 15:46:22 +0000 (0:00:00.227) 0:03:26.976 ***
2025-09-17 15:46:34.015810 | orchestrator | ok: [testbed-manager]
2025-09-17 15:46:34.015822 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:46:34.015833 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:46:34.015844 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:46:34.015855 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:46:34.015866 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:46:34.015880 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:46:34.015899 | orchestrator |
2025-09-17 15:46:34.015918 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-09-17 15:46:34.015936 | orchestrator | Wednesday 17 September 2025 15:46:28 +0000 (0:00:05.574) 0:03:32.551 ***
2025-09-17 15:46:34.015955 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-09-17 15:46:34.015973 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:46:34.015992 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-09-17 15:46:34.016009 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-09-17 15:46:34.016030 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:46:34.016047 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-09-17 15:46:34.016066 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:46:34.016084 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-09-17 15:46:34.016124 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:46:34.016154 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:46:34.016167 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-09-17 15:46:34.016180 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:46:34.016193 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-09-17 15:46:34.016206 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:46:34.016219 | orchestrator |
2025-09-17 15:46:34.016231 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-09-17 15:46:34.016244 | orchestrator | Wednesday 17 September 2025 15:46:28 +0000 (0:00:00.289) 0:03:32.841 ***
2025-09-17 15:46:34.016257 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-09-17 15:46:34.016270 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-09-17 15:46:34.016282 |
orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-17 15:46:34.016295 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-17 15:46:34.016307 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-17 15:46:34.016319 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-17 15:46:34.016331 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-17 15:46:34.016344 | orchestrator | 2025-09-17 15:46:34.016357 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-17 15:46:34.016369 | orchestrator | Wednesday 17 September 2025 15:46:29 +0000 (0:00:00.988) 0:03:33.829 *** 2025-09-17 15:46:34.016385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:46:34.016427 | orchestrator | 2025-09-17 15:46:34.016440 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-17 15:46:34.016453 | orchestrator | Wednesday 17 September 2025 15:46:29 +0000 (0:00:00.424) 0:03:34.254 *** 2025-09-17 15:46:34.016465 | orchestrator | ok: [testbed-manager] 2025-09-17 15:46:34.016478 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:46:34.016490 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:46:34.016503 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:46:34.016515 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:46:34.016550 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:46:34.016568 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:46:34.016585 | orchestrator | 2025-09-17 15:46:34.016603 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-17 15:46:34.016621 | orchestrator | Wednesday 17 September 2025 15:46:31 +0000 (0:00:01.359) 0:03:35.613 *** 2025-09-17 15:46:34.016640 | 
orchestrator | ok: [testbed-manager] 2025-09-17 15:46:34.016658 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:46:34.016676 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:46:34.016693 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:46:34.016704 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:46:34.016715 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:46:34.016725 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:46:34.016736 | orchestrator | 2025-09-17 15:46:34.016747 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-17 15:46:34.016757 | orchestrator | Wednesday 17 September 2025 15:46:31 +0000 (0:00:00.635) 0:03:36.248 *** 2025-09-17 15:46:34.016769 | orchestrator | changed: [testbed-manager] 2025-09-17 15:46:34.016780 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:46:34.016790 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:46:34.016801 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:46:34.016811 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:46:34.016822 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:46:34.016833 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:46:34.016843 | orchestrator | 2025-09-17 15:46:34.016854 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-17 15:46:34.016880 | orchestrator | Wednesday 17 September 2025 15:46:32 +0000 (0:00:00.599) 0:03:36.848 *** 2025-09-17 15:46:34.016891 | orchestrator | ok: [testbed-manager] 2025-09-17 15:46:34.016902 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:46:34.016913 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:46:34.016924 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:46:34.016934 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:46:34.016945 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:46:34.016955 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:46:34.016966 | orchestrator | 
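The motd role above enumerates the configuration files in /etc/pam.d and then removes the pam_motd.so rule from each of them. As an illustrative sketch only (not the osism.commons.motd implementation; the function name and sample lines are invented for this example), the filtering step amounts to dropping every PAM line that references the module:

```python
# Hypothetical helper, not part of osism.commons.motd: strip "pam_motd.so"
# rules from the text of a PAM configuration file.

def strip_pam_motd(pam_config: str) -> str:
    """Drop every line that references pam_motd.so, keep everything else."""
    kept = [line for line in pam_config.splitlines()
            if "pam_motd.so" not in line]
    return "\n".join(kept)

# Sample PAM lines (typical Debian sshd defaults, shown only as test input).
sample = (
    "session    optional     pam_motd.so motd=/run/motd.dynamic\n"
    "session    optional     pam_motd.so noupdate\n"
    "session    required     pam_env.so readenv=1\n"
)
print(strip_pam_motd(sample))
```

In the real run this is applied per file found by the "Get all configuration files in /etc/pam.d" task, which is why the next task reports one changed item per PAM file and host.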
2025-09-17 15:46:34.016977 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-17 15:46:34.016988 | orchestrator | Wednesday 17 September 2025 15:46:32 +0000 (0:00:00.590) 0:03:37.439 ***
2025-09-17 15:46:34.017024 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758122506.1823833, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:34.017045 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758122538.8648918, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:34.017080 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758122534.1241937, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:34.017100 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758122536.9598417, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:34.017118 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758122534.727483, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:34.017138 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758122546.202, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:34.017158 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758122539.216634, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:34.017202 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:58.985311 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:58.985459 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:58.985476 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:58.985505 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:58.985518 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:58.985573 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 15:46:58.985586 | orchestrator |
2025-09-17 15:46:58.985600 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-09-17 15:46:58.985613 | orchestrator | Wednesday 17 September 2025 15:46:33 +0000 (0:00:01.023) 0:03:38.463 ***
2025-09-17 15:46:58.985624 | orchestrator | changed: [testbed-manager]
2025-09-17 15:46:58.985636 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:46:58.985647 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:46:58.985657 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:46:58.985668 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:46:58.985679 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:46:58.985690 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:46:58.985700 | orchestrator |
2025-09-17 15:46:58.985712 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-09-17 15:46:58.985724 | orchestrator | Wednesday 17 September 2025 15:46:35 +0000 (0:00:01.169) 0:03:39.633 ***
2025-09-17 15:46:58.985744 | orchestrator | changed: [testbed-manager]
2025-09-17 15:46:58.985755 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:46:58.985766 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:46:58.985776 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:46:58.985804 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:46:58.985816 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:46:58.985827 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:46:58.985837 | orchestrator |
2025-09-17 15:46:58.985850 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-09-17 15:46:58.985862 | orchestrator | Wednesday 17 September 2025 15:46:36 +0000 (0:00:01.237) 0:03:40.870 ***
2025-09-17 15:46:58.985875 | orchestrator | changed: [testbed-manager]
2025-09-17 15:46:58.985887 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:46:58.985899 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:46:58.985911 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:46:58.985922 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:46:58.985934 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:46:58.985946 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:46:58.985959 | orchestrator |
2025-09-17 15:46:58.985971 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-09-17 15:46:58.985983 | orchestrator | Wednesday 17 September 2025 15:46:37 +0000 (0:00:01.103) 0:03:41.973 ***
2025-09-17 15:46:58.985995 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:46:58.986007 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:46:58.986075 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:46:58.986089 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:46:58.986101 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:46:58.986113 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:46:58.986123 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:46:58.986134 | orchestrator |
2025-09-17 15:46:58.986145 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-09-17 15:46:58.986155 | orchestrator | Wednesday 17 September 2025 15:46:37 +0000 (0:00:00.267) 0:03:42.240 ***
2025-09-17 15:46:58.986177 | orchestrator | ok: [testbed-manager]
2025-09-17 15:46:58.986189 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:46:58.986200 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:46:58.986210 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:46:58.986221 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:46:58.986232 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:46:58.986242 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:46:58.986253 | orchestrator |
2025-09-17 15:46:58.986264 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-09-17 15:46:58.986274 | orchestrator | Wednesday 17 September 2025 15:46:38 +0000 (0:00:00.720) 0:03:42.961 ***
2025-09-17 15:46:58.986287 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:46:58.986300 | orchestrator |
2025-09-17 15:46:58.986311 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-09-17 15:46:58.986322 | orchestrator | Wednesday 17 September 2025 15:46:38 +0000 (0:00:00.466) 0:03:43.428 ***
2025-09-17 15:46:58.986333 | orchestrator | ok: [testbed-manager]
2025-09-17 15:46:58.986343 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:46:58.986354 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:46:58.986365 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:46:58.986375 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:46:58.986386 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:46:58.986397 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:46:58.986407 | orchestrator |
2025-09-17 15:46:58.986418 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-09-17 15:46:58.986429 | orchestrator | Wednesday 17 September 2025 15:46:46 +0000 (0:00:07.999) 0:03:51.428 ***
2025-09-17 15:46:58.986439 | orchestrator | ok: [testbed-manager]
2025-09-17 15:46:58.986458 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:46:58.986469 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:46:58.986479 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:46:58.986490 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:46:58.986500 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:46:58.986511 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:46:58.986521 | orchestrator |
2025-09-17 15:46:58.986553 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-09-17 15:46:58.986564 | orchestrator | Wednesday 17 September 2025 15:46:48 +0000 (0:00:01.317) 0:03:52.745 ***
2025-09-17 15:46:58.986575 | orchestrator | ok: [testbed-manager]
2025-09-17 15:46:58.986585 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:46:58.986596 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:46:58.986607 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:46:58.986617 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:46:58.986628 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:46:58.986638 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:46:58.986649 | orchestrator |
2025-09-17 15:46:58.986660 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-17 15:46:58.986670 | orchestrator | Wednesday 17 September 2025 15:46:49 +0000 (0:00:01.059) 0:03:53.804 ***
2025-09-17 15:46:58.986687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:46:58.986699 | orchestrator |
2025-09-17 15:46:58.986710 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-09-17 15:46:58.986720 | orchestrator | Wednesday 17 September 2025 15:46:49 +0000 (0:00:00.470) 0:03:54.275 ***
2025-09-17 15:46:58.986731 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:46:58.986742 | orchestrator | changed: [testbed-manager]
2025-09-17 15:46:58.986752 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:46:58.986763 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:46:58.986774 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:46:58.986785 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:46:58.986795 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:46:58.986806 | orchestrator |
2025-09-17 15:46:58.986817 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-09-17 15:46:58.986827 | orchestrator | Wednesday 17 September 2025 15:46:58 +0000 (0:00:08.504) 0:04:02.779 ***
2025-09-17 15:46:58.986838 | orchestrator | changed: [testbed-manager]
2025-09-17 15:46:58.986849 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:46:58.986859 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:46:58.986878 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:48:06.017103 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:48:06.017221 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:48:06.017237 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:48:06.017250 | orchestrator |
2025-09-17 15:48:06.017262 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-09-17 15:48:06.017275 | orchestrator | Wednesday 17 September 2025 15:46:58 +0000 (0:00:00.653) 0:04:03.433 ***
2025-09-17 15:48:06.017286 | orchestrator | changed: [testbed-manager]
2025-09-17 15:48:06.017297 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:48:06.017308 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:48:06.017318 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:48:06.017329 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:48:06.017340 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:48:06.017351 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:48:06.017362 | orchestrator |
2025-09-17 15:48:06.017373 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-09-17 15:48:06.017384 | orchestrator | Wednesday 17 September 2025 15:47:00 +0000 (0:00:01.101) 0:04:04.535 ***
2025-09-17 15:48:06.017395 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:48:06.017406 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:48:06.017443 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:48:06.017454 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:48:06.017465 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:48:06.017475 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:48:06.017486 | orchestrator | changed: [testbed-manager]
2025-09-17 15:48:06.017496 | orchestrator |
2025-09-17 15:48:06.017507 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-09-17 15:48:06.017518 | orchestrator | Wednesday 17 September 2025 15:47:01 +0000 (0:00:01.820) 0:04:06.356 ***
2025-09-17 15:48:06.017555 | orchestrator | ok: [testbed-manager]
2025-09-17 15:48:06.017568 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:48:06.017579 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:48:06.017589 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:48:06.017600 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:48:06.017610 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:48:06.017621 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:48:06.017634 | orchestrator |
2025-09-17 15:48:06.017648 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-09-17 15:48:06.017662 | orchestrator | Wednesday 17 September 2025 15:47:02 +0000 (0:00:00.274) 0:04:06.630 ***
2025-09-17 15:48:06.017674 | orchestrator | ok: [testbed-manager]
2025-09-17 15:48:06.017687 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:48:06.017699 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:48:06.017711 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:48:06.017723 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:48:06.017736 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:48:06.017749 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:48:06.017762 | orchestrator |
2025-09-17 15:48:06.017774 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-09-17 15:48:06.017787 | orchestrator | Wednesday 17 September 2025 15:47:02 +0000 (0:00:00.293) 0:04:06.924 ***
2025-09-17 15:48:06.017800 | orchestrator | ok: [testbed-manager]
2025-09-17 15:48:06.017812 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:48:06.017824 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:48:06.017837 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:48:06.017850 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:48:06.017862 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:48:06.017874 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:48:06.017887 | orchestrator |
2025-09-17 15:48:06.017899 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-09-17 15:48:06.017912 | orchestrator | Wednesday 17 September 2025 15:47:02 +0000 (0:00:00.288) 0:04:07.212 ***
2025-09-17 15:48:06.017924 | orchestrator | ok: [testbed-manager]
2025-09-17 15:48:06.017936 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:48:06.017949 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:48:06.017961 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:48:06.017974 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:48:06.017985 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:48:06.017995 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:48:06.018006 | orchestrator |
2025-09-17 15:48:06.018077 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-09-17 15:48:06.018090 | orchestrator | Wednesday 17 September 2025 15:47:08 +0000 (0:00:05.667) 0:04:12.880 ***
2025-09-17 15:48:06.018102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:48:06.018116 | orchestrator |
2025-09-17 15:48:06.018127 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-09-17 15:48:06.018147 | orchestrator | Wednesday 17 September 2025 15:47:08 +0000 (0:00:00.388) 0:04:13.269 ***
2025-09-17 15:48:06.018158 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-09-17 15:48:06.018169 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-09-17 15:48:06.018180 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-09-17 15:48:06.018216 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-09-17 15:48:06.018229 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:48:06.018239 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-09-17 15:48:06.018250 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-09-17 15:48:06.018261 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:48:06.018272 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-09-17 15:48:06.018282 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:48:06.018293 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-09-17 15:48:06.018303 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-09-17 15:48:06.018314 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-09-17 15:48:06.018324 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:48:06.018335 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-09-17 15:48:06.018346 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:48:06.018356 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-09-17 15:48:06.018385 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:48:06.018397 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-09-17 15:48:06.018408 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-09-17 15:48:06.018418 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:48:06.018429 | orchestrator |
2025-09-17 15:48:06.018439 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-09-17 15:48:06.018450 | orchestrator | Wednesday 17 September 2025 15:47:09 +0000 (0:00:00.408) 0:04:13.616 ***
2025-09-17 15:48:06.018461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:48:06.018472 | orchestrator |
2025-09-17 15:48:06.018483 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-09-17 15:48:06.018494 | orchestrator | Wednesday 17 September 2025 15:47:09 +0000 (0:00:00.322) 0:04:14.025 ***
2025-09-17 15:48:06.018504 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-09-17 15:48:06.018515 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-09-17 15:48:06.018525 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:48:06.018557 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:48:06.018568 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-09-17 15:48:06.018579 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:48:06.018589 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-09-17 15:48:06.018600 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:48:06.018610 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-09-17 15:48:06.018621 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-09-17 15:48:06.018632 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:48:06.018642 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:48:06.018653 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-09-17 15:48:06.018663 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:48:06.018674 | orchestrator |
2025-09-17 15:48:06.018685 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-09-17 15:48:06.018695 | orchestrator | Wednesday 17 September 2025 15:47:09 +0000 (0:00:00.322) 0:04:14.347 ***
2025-09-17 15:48:06.018706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:48:06.018717 | orchestrator |
2025-09-17 15:48:06.018728 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-09-17 15:48:06.018747 | orchestrator | Wednesday 17 September 2025 15:47:10 +0000 (0:00:00.514) 0:04:14.861 ***
2025-09-17 15:48:06.018758 | orchestrator | changed: [testbed-manager]
2025-09-17 15:48:06.018768 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:48:06.018779 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:48:06.018790 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:48:06.018800 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:48:06.018811 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:48:06.018821 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:48:06.018831 | orchestrator |
2025-09-17 15:48:06.018842 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-09-17 15:48:06.018853 | orchestrator | Wednesday 17 September 2025 15:47:43 +0000 (0:00:32.989) 0:04:47.851 ***
2025-09-17 15:48:06.018863 | orchestrator | changed: [testbed-manager]
2025-09-17 15:48:06.018874 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:48:06.018884 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:48:06.018895 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:48:06.018905 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:48:06.018916 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:48:06.018926 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:48:06.018937 | orchestrator |
2025-09-17 15:48:06.018947 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-09-17 15:48:06.018958 | orchestrator | Wednesday 17 September 2025 15:47:51 +0000 (0:00:07.680) 0:04:55.532 ***
2025-09-17 15:48:06.018969 | orchestrator | changed: [testbed-manager]
2025-09-17 15:48:06.018979 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:48:06.018990 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:48:06.019000 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:48:06.019011 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:48:06.019021 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:48:06.019031 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:48:06.019042 | orchestrator |
2025-09-17 15:48:06.019053 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-09-17 15:48:06.019064 | orchestrator | Wednesday 17 September 2025 15:47:58 +0000 (0:00:07.479) 0:05:03.012 ***
2025-09-17 15:48:06.019074 | orchestrator | ok: [testbed-manager]
2025-09-17 15:48:06.019085 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:48:06.019096 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:48:06.019106 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:48:06.019117 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:48:06.019127 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:48:06.019138 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:48:06.019148 | orchestrator |
2025-09-17 15:48:06.019159 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-09-17 15:48:06.019170 | orchestrator | Wednesday 17 September 2025 15:48:00 +0000 (0:00:01.632) 0:05:04.644 ***
2025-09-17 15:48:06.019180 | orchestrator | changed: [testbed-manager]
2025-09-17 15:48:06.019191 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:48:06.019202 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:48:06.019212 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:48:06.019222 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:48:06.019233 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:48:06.019243 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:48:06.019254 | orchestrator |
2025-09-17 15:48:06.019265 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-09-17 15:48:06.019283 | orchestrator | Wednesday 17 September 2025 15:48:05 +0000 (0:00:05.820) 0:05:10.465 ***
2025-09-17 15:48:16.701618 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:48:16.701740 | orchestrator |
2025-09-17 15:48:16.701757 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-09-17 15:48:16.701795 | orchestrator | Wednesday 17 September 2025 15:48:06 +0000 (0:00:00.451) 0:05:10.917 ***
2025-09-17
15:48:16.701807 | orchestrator | changed: [testbed-manager] 2025-09-17 15:48:16.701819 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:48:16.701830 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:48:16.701840 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:48:16.701851 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:48:16.701861 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:48:16.701872 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:48:16.701883 | orchestrator | 2025-09-17 15:48:16.701894 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-17 15:48:16.701904 | orchestrator | Wednesday 17 September 2025 15:48:07 +0000 (0:00:00.711) 0:05:11.628 *** 2025-09-17 15:48:16.701915 | orchestrator | ok: [testbed-manager] 2025-09-17 15:48:16.701926 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:48:16.701937 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:48:16.701948 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:48:16.701958 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:48:16.701969 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:48:16.701979 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:48:16.701990 | orchestrator | 2025-09-17 15:48:16.702001 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-17 15:48:16.702011 | orchestrator | Wednesday 17 September 2025 15:48:08 +0000 (0:00:01.689) 0:05:13.318 *** 2025-09-17 15:48:16.702074 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:48:16.702086 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:48:16.702097 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:48:16.702110 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:48:16.702122 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:48:16.702133 | orchestrator | changed: [testbed-manager] 2025-09-17 15:48:16.702153 | orchestrator | changed: [testbed-node-5] 
2025-09-17 15:48:16.702164 | orchestrator |
2025-09-17 15:48:16.702977 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-09-17 15:48:16.703068 | orchestrator | Wednesday 17 September 2025 15:48:09 +0000 (0:00:00.789) 0:05:14.107 ***
2025-09-17 15:48:16.703086 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:48:16.703101 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:48:16.703112 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:48:16.703123 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:48:16.703133 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:48:16.703144 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:48:16.703155 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:48:16.703166 | orchestrator |
2025-09-17 15:48:16.703178 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-09-17 15:48:16.703189 | orchestrator | Wednesday 17 September 2025 15:48:09 +0000 (0:00:00.294) 0:05:14.402 ***
2025-09-17 15:48:16.703200 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:48:16.703211 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:48:16.703222 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:48:16.703232 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:48:16.703243 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:48:16.703253 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:48:16.703264 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:48:16.703275 | orchestrator |
2025-09-17 15:48:16.703286 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-09-17 15:48:16.703296 | orchestrator | Wednesday 17 September 2025 15:48:10 +0000 (0:00:00.406) 0:05:14.809 ***
2025-09-17 15:48:16.703307 | orchestrator | ok: [testbed-manager]
2025-09-17 15:48:16.703319 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:48:16.703330 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:48:16.703340 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:48:16.703351 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:48:16.703362 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:48:16.703372 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:48:16.703414 | orchestrator |
2025-09-17 15:48:16.703443 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-09-17 15:48:16.703455 | orchestrator | Wednesday 17 September 2025 15:48:10 +0000 (0:00:00.295) 0:05:15.104 ***
2025-09-17 15:48:16.703466 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:48:16.703477 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:48:16.703487 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:48:16.703498 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:48:16.703508 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:48:16.703519 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:48:16.703581 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:48:16.703594 | orchestrator |
2025-09-17 15:48:16.703605 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-09-17 15:48:16.703623 | orchestrator | Wednesday 17 September 2025 15:48:10 +0000 (0:00:00.289) 0:05:15.394 ***
2025-09-17 15:48:16.703634 | orchestrator | ok: [testbed-manager]
2025-09-17 15:48:16.703645 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:48:16.703656 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:48:16.703666 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:48:16.703677 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:48:16.703688 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:48:16.703698 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:48:16.703709 | orchestrator |
2025-09-17 15:48:16.703720 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-09-17 15:48:16.703731 | orchestrator | Wednesday 17 September 2025 15:48:11 +0000 (0:00:00.316) 0:05:15.710 ***
2025-09-17 15:48:16.703742 | orchestrator | ok: [testbed-manager] =>
2025-09-17 15:48:16.703752 | orchestrator |   docker_version: 5:27.5.1
2025-09-17 15:48:16.703763 | orchestrator | ok: [testbed-node-0] =>
2025-09-17 15:48:16.703774 | orchestrator |   docker_version: 5:27.5.1
2025-09-17 15:48:16.703784 | orchestrator | ok: [testbed-node-1] =>
2025-09-17 15:48:16.703795 | orchestrator |   docker_version: 5:27.5.1
2025-09-17 15:48:16.703805 | orchestrator | ok: [testbed-node-2] =>
2025-09-17 15:48:16.703816 | orchestrator |   docker_version: 5:27.5.1
2025-09-17 15:48:16.703827 | orchestrator | ok: [testbed-node-3] =>
2025-09-17 15:48:16.703837 | orchestrator |   docker_version: 5:27.5.1
2025-09-17 15:48:16.703873 | orchestrator | ok: [testbed-node-4] =>
2025-09-17 15:48:16.703885 | orchestrator |   docker_version: 5:27.5.1
2025-09-17 15:48:16.703895 | orchestrator | ok: [testbed-node-5] =>
2025-09-17 15:48:16.703906 | orchestrator |   docker_version: 5:27.5.1
2025-09-17 15:48:16.703917 | orchestrator |
2025-09-17 15:48:16.703928 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-09-17 15:48:16.703938 | orchestrator | Wednesday 17 September 2025 15:48:11 +0000 (0:00:00.268) 0:05:15.979 ***
2025-09-17 15:48:16.703949 | orchestrator | ok: [testbed-manager] =>
2025-09-17 15:48:16.703960 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-17 15:48:16.703970 | orchestrator | ok: [testbed-node-0] =>
2025-09-17 15:48:16.703981 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-17 15:48:16.703991 | orchestrator | ok: [testbed-node-1] =>
2025-09-17 15:48:16.704002 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-17 15:48:16.704013 | orchestrator | ok: [testbed-node-2] =>
2025-09-17 15:48:16.704023 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-17 15:48:16.704034 | orchestrator | ok: [testbed-node-3] =>
2025-09-17 15:48:16.704044 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-17 15:48:16.704055 | orchestrator | ok: [testbed-node-4] =>
2025-09-17 15:48:16.704065 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-17 15:48:16.704076 | orchestrator | ok: [testbed-node-5] =>
2025-09-17 15:48:16.704087 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-17 15:48:16.704097 | orchestrator |
2025-09-17 15:48:16.704108 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-09-17 15:48:16.704119 | orchestrator | Wednesday 17 September 2025 15:48:11 +0000 (0:00:00.364) 0:05:16.344 ***
2025-09-17 15:48:16.704130 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:48:16.704150 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:48:16.704161 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:48:16.704172 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:48:16.704182 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:48:16.704193 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:48:16.704204 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:48:16.704214 | orchestrator |
2025-09-17 15:48:16.704225 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-09-17 15:48:16.704236 | orchestrator | Wednesday 17 September 2025 15:48:12 +0000 (0:00:00.242) 0:05:16.586 ***
2025-09-17 15:48:16.704247 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:48:16.704257 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:48:16.704268 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:48:16.704279 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:48:16.704289 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:48:16.704300 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:48:16.704310 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:48:16.704321 | orchestrator |
2025-09-17 15:48:16.704332 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-09-17 15:48:16.704343 | orchestrator | Wednesday 17 September 2025 15:48:12 +0000 (0:00:00.266) 0:05:16.853 ***
2025-09-17 15:48:16.704356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:48:16.704370 | orchestrator |
2025-09-17 15:48:16.704381 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-09-17 15:48:16.704392 | orchestrator | Wednesday 17 September 2025 15:48:12 +0000 (0:00:00.377) 0:05:17.231 ***
2025-09-17 15:48:16.704403 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:48:16.704413 | orchestrator | ok: [testbed-manager]
2025-09-17 15:48:16.704424 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:48:16.704435 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:48:16.704445 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:48:16.704456 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:48:16.704467 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:48:16.704478 | orchestrator |
2025-09-17 15:48:16.704488 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-09-17 15:48:16.704499 | orchestrator | Wednesday 17 September 2025 15:48:13 +0000 (0:00:00.756) 0:05:17.988 ***
2025-09-17 15:48:16.704510 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:48:16.704521 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:48:16.704549 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:48:16.704560 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:48:16.704571 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:48:16.704581 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:48:16.704592 | orchestrator | ok: [testbed-manager]
2025-09-17 15:48:16.704602 | orchestrator |
2025-09-17 15:48:16.704614 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-09-17 15:48:16.704625 | orchestrator | Wednesday 17 September 2025 15:48:16 +0000 (0:00:02.640) 0:05:20.629 ***
2025-09-17 15:48:16.704636 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-09-17 15:48:16.704647 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-09-17 15:48:16.704663 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-09-17 15:48:16.704674 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-09-17 15:48:16.704685 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-09-17 15:48:16.704695 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-09-17 15:48:16.704706 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:48:16.704717 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-09-17 15:48:16.704727 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-09-17 15:48:16.704738 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-09-17 15:48:16.704755 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:48:16.704766 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-09-17 15:48:16.704776 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-09-17 15:48:16.704787 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-09-17 15:48:16.704798 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:48:16.704808 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-09-17 15:48:16.704819 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-09-17 15:48:16.704837 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-09-17 15:49:15.896790 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:49:15.896908 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-09-17 15:49:15.896925 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-09-17 15:49:15.896937 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-09-17 15:49:15.896949 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:49:15.896960 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:49:15.896971 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-09-17 15:49:15.896982 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-09-17 15:49:15.896993 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-09-17 15:49:15.897003 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:49:15.897015 | orchestrator |
2025-09-17 15:49:15.897027 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-09-17 15:49:15.897040 | orchestrator | Wednesday 17 September 2025 15:48:16 +0000 (0:00:00.731) 0:05:21.360 ***
2025-09-17 15:49:15.897051 | orchestrator | ok: [testbed-manager]
2025-09-17 15:49:15.897062 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:49:15.897073 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:49:15.897084 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:49:15.897094 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:49:15.897105 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:49:15.897116 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:49:15.897127 | orchestrator |
2025-09-17 15:49:15.897138 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-09-17 15:49:15.897148 | orchestrator | Wednesday 17 September 2025 15:48:22 +0000 (0:00:05.832) 0:05:27.193 ***
2025-09-17 15:49:15.897159 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:49:15.897170 | orchestrator | ok: [testbed-manager]
2025-09-17 15:49:15.897180 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:49:15.897191 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:49:15.897201 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:49:15.897212 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:49:15.897223 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:49:15.897234 | orchestrator |
2025-09-17 15:49:15.897245 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-09-17 15:49:15.897255 | orchestrator | Wednesday 17 September 2025 15:48:23 +0000 (0:00:01.076) 0:05:28.270 ***
2025-09-17 15:49:15.897266 | orchestrator | ok: [testbed-manager]
2025-09-17 15:49:15.897277 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:49:15.897287 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:49:15.897317 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:49:15.897340 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:49:15.897352 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:49:15.897364 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:49:15.897377 | orchestrator |
2025-09-17 15:49:15.897389 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-09-17 15:49:15.897401 | orchestrator | Wednesday 17 September 2025 15:48:31 +0000 (0:00:07.860) 0:05:36.130 ***
2025-09-17 15:49:15.897413 | orchestrator | changed: [testbed-manager]
2025-09-17 15:49:15.897425 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:49:15.897437 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:49:15.897475 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:49:15.897488 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:49:15.897500 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:49:15.897511 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:49:15.897523 | orchestrator |
2025-09-17 15:49:15.897554 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-09-17 15:49:15.897567 | orchestrator | Wednesday 17 September 2025 15:48:35 +0000 (0:00:03.358) 0:05:39.488 ***
2025-09-17 15:49:15.897579 | orchestrator | ok: [testbed-manager]
2025-09-17 15:49:15.897591 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:49:15.897603 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:49:15.897615 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:49:15.897627 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:49:15.897638 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:49:15.897650 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:49:15.897662 | orchestrator |
2025-09-17 15:49:15.897675 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-09-17 15:49:15.897686 | orchestrator | Wednesday 17 September 2025 15:48:36 +0000 (0:00:01.532) 0:05:41.021 ***
2025-09-17 15:49:15.897697 | orchestrator | ok: [testbed-manager]
2025-09-17 15:49:15.897707 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:49:15.897718 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:49:15.897728 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:49:15.897739 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:49:15.897750 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:49:15.897760 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:49:15.897771 | orchestrator |
2025-09-17 15:49:15.897781 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-09-17 15:49:15.897792 | orchestrator | Wednesday 17 September 2025 15:48:37 +0000 (0:00:01.326) 0:05:42.347 ***
2025-09-17 15:49:15.897803 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:49:15.897828 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:49:15.897839 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:49:15.897850 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:49:15.897861 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:49:15.897871 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:49:15.897882 | orchestrator | changed: [testbed-manager]
2025-09-17 15:49:15.897892 | orchestrator |
2025-09-17 15:49:15.897903 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-09-17 15:49:15.897913 | orchestrator | Wednesday 17 September 2025 15:48:38 +0000 (0:00:00.617) 0:05:42.965 ***
2025-09-17 15:49:15.897924 | orchestrator | ok: [testbed-manager]
2025-09-17 15:49:15.897935 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:49:15.897945 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:49:15.897956 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:49:15.897966 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:49:15.897976 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:49:15.897987 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:49:15.897998 | orchestrator |
2025-09-17 15:49:15.898008 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-09-17 15:49:15.898077 | orchestrator | Wednesday 17 September 2025 15:48:48 +0000 (0:00:09.974) 0:05:52.940 ***
2025-09-17 15:49:15.898089 | orchestrator | changed: [testbed-manager]
2025-09-17 15:49:15.898118 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:49:15.898130 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:49:15.898140 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:49:15.898151 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:49:15.898162 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:49:15.898172 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:49:15.898183 | orchestrator |
2025-09-17 15:49:15.898194 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-09-17 15:49:15.898205 | orchestrator | Wednesday 17 September 2025 15:48:49 +0000 (0:00:00.887) 0:05:53.827 ***
2025-09-17 15:49:15.898227 | orchestrator | ok: [testbed-manager]
2025-09-17 15:49:15.898238 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:49:15.898248 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:49:15.898259 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:49:15.898270 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:49:15.898280 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:49:15.898291 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:49:15.898302 | orchestrator |
2025-09-17 15:49:15.898312 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-09-17 15:49:15.898323 | orchestrator | Wednesday 17 September 2025 15:48:58 +0000 (0:00:09.329) 0:06:03.157 ***
2025-09-17 15:49:15.898334 | orchestrator | ok: [testbed-manager]
2025-09-17 15:49:15.898344 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:49:15.898355 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:49:15.898365 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:49:15.898376 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:49:15.898387 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:49:15.898397 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:49:15.898408 | orchestrator |
2025-09-17 15:49:15.898418 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-09-17 15:49:15.898429 | orchestrator | Wednesday 17 September 2025 15:49:09 +0000 (0:00:10.848) 0:06:14.006 ***
2025-09-17 15:49:15.898440 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-09-17 15:49:15.898451 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-09-17 15:49:15.898461 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-09-17 15:49:15.898748 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-09-17 15:49:15.898770 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-09-17 15:49:15.898781 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-09-17 15:49:15.898792 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-09-17 15:49:15.898802 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-09-17 15:49:15.898813 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-09-17 15:49:15.898823 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-09-17 15:49:15.898833 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-09-17 15:49:15.898844 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-09-17 15:49:15.898854 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-09-17 15:49:15.898865 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-09-17 15:49:15.898875 | orchestrator |
2025-09-17 15:49:15.898886 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-09-17 15:49:15.898897 | orchestrator | Wednesday 17 September 2025 15:49:10 +0000 (0:00:01.178) 0:06:15.184 ***
2025-09-17 15:49:15.898907 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:49:15.898918 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:49:15.898928 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:49:15.898938 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:49:15.898949 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:49:15.898959 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:49:15.898970 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:49:15.898980 | orchestrator |
2025-09-17 15:49:15.898991 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-09-17 15:49:15.899002 | orchestrator | Wednesday 17 September 2025 15:49:11 +0000 (0:00:00.508) 0:06:15.692 ***
2025-09-17 15:49:15.899012 | orchestrator | ok: [testbed-manager]
2025-09-17 15:49:15.899023 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:49:15.899033 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:49:15.899044 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:49:15.899054 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:49:15.899064 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:49:15.899075 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:49:15.899085 | orchestrator |
2025-09-17 15:49:15.899096 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-09-17 15:49:15.899118 | orchestrator | Wednesday 17 September 2025 15:49:15 +0000 (0:00:03.858) 0:06:19.551 ***
2025-09-17 15:49:15.899129 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:49:15.899139 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:49:15.899150 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:49:15.899160 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:49:15.899170 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:49:15.899189 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:49:15.899200 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:49:15.899211 | orchestrator |
2025-09-17 15:49:15.899222 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-09-17 15:49:15.899233 | orchestrator | Wednesday 17 September 2025 15:49:15 +0000 (0:00:00.504) 0:06:20.056 ***
2025-09-17 15:49:15.899244 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-09-17 15:49:15.899254 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-09-17 15:49:15.899265 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:49:15.899275 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-09-17 15:49:15.899286 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-09-17 15:49:15.899296 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:49:15.899307 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-09-17 15:49:15.899318 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-09-17 15:49:15.899328 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:49:15.899339 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-09-17 15:49:15.899359 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-09-17 15:49:34.878184 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:49:34.878303 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-09-17 15:49:34.878319 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-09-17 15:49:34.878332 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-09-17 15:49:34.878397 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-09-17 15:49:34.878412 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:49:34.878423 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:49:34.878434 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-09-17 15:49:34.878445 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-09-17 15:49:34.878456 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:49:34.878467 | orchestrator |
2025-09-17 15:49:34.878480 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-09-17 15:49:34.878492 | orchestrator | Wednesday 17 September 2025 15:49:16 +0000 (0:00:00.548) 0:06:20.605 ***
2025-09-17 15:49:34.878503 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:49:34.878514 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:49:34.878525 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:49:34.878576 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:49:34.878588 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:49:34.878599 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:49:34.878610 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:49:34.878621 | orchestrator |
2025-09-17 15:49:34.878632 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-09-17 15:49:34.878643 | orchestrator | Wednesday 17 September 2025 15:49:16 +0000 (0:00:00.480) 0:06:21.085 ***
2025-09-17 15:49:34.878654 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:49:34.878664 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:49:34.878675 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:49:34.878685 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:49:34.878696 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:49:34.878707 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:49:34.878745 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:49:34.878758 | orchestrator |
2025-09-17 15:49:34.878771 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-09-17 15:49:34.878783 | orchestrator | Wednesday 17 September 2025 15:49:17 +0000 (0:00:00.672) 0:06:21.557 ***
2025-09-17 15:49:34.878795 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:49:34.878807 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:49:34.878820 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:49:34.878833 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:49:34.878845 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:49:34.878857 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:49:34.878870 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:49:34.878882 | orchestrator |
2025-09-17 15:49:34.878894 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-09-17 15:49:34.878907 | orchestrator | Wednesday 17 September 2025 15:49:17 +0000 (0:00:00.672) 0:06:22.230 ***
2025-09-17 15:49:34.878919 | orchestrator | ok: [testbed-manager]
2025-09-17 15:49:34.878932 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:49:34.878944 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:49:34.878955 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:49:34.878967 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:49:34.878980 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:49:34.878992 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:49:34.879004 | orchestrator |
2025-09-17 15:49:34.879016 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-09-17 15:49:34.879028 | orchestrator | Wednesday 17 September 2025 15:49:19 +0000 (0:00:01.770) 0:06:24.000 ***
2025-09-17 15:49:34.879042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:49:34.879057 | orchestrator |
2025-09-17 15:49:34.879069 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-09-17 15:49:34.879080 | orchestrator | Wednesday 17 September 2025 15:49:20 +0000 (0:00:00.815) 0:06:24.816 ***
2025-09-17 15:49:34.879091 | orchestrator | ok: [testbed-manager]
2025-09-17 15:49:34.879101 | orchestrator | changed: [testbed-node-0]
2025-09-17 15:49:34.879112 | orchestrator | changed: [testbed-node-1]
2025-09-17 15:49:34.879123 | orchestrator | changed: [testbed-node-2]
2025-09-17 15:49:34.879133 | orchestrator | changed: [testbed-node-3]
2025-09-17 15:49:34.879143 | orchestrator | changed: [testbed-node-4]
2025-09-17 15:49:34.879154 | orchestrator | changed: [testbed-node-5]
2025-09-17 15:49:34.879164 | orchestrator |
2025-09-17 15:49:34.879175 |
orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-17 15:49:34.879186 | orchestrator | Wednesday 17 September 2025 15:49:21 +0000 (0:00:00.819) 0:06:25.636 *** 2025-09-17 15:49:34.879196 | orchestrator | ok: [testbed-manager] 2025-09-17 15:49:34.879207 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:49:34.879218 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:49:34.879228 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:49:34.879239 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:49:34.879249 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:49:34.879260 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:49:34.879270 | orchestrator | 2025-09-17 15:49:34.879281 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-17 15:49:34.879292 | orchestrator | Wednesday 17 September 2025 15:49:22 +0000 (0:00:01.112) 0:06:26.749 *** 2025-09-17 15:49:34.879303 | orchestrator | ok: [testbed-manager] 2025-09-17 15:49:34.879313 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:49:34.879324 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:49:34.879334 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:49:34.879345 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:49:34.879355 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:49:34.879366 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:49:34.879384 | orchestrator | 2025-09-17 15:49:34.879395 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-17 15:49:34.879406 | orchestrator | Wednesday 17 September 2025 15:49:23 +0000 (0:00:01.330) 0:06:28.080 *** 2025-09-17 15:49:34.879435 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:49:34.879446 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:49:34.879457 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:49:34.879468 | 
orchestrator | ok: [testbed-node-2] 2025-09-17 15:49:34.879478 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:49:34.879489 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:49:34.879500 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:49:34.879510 | orchestrator | 2025-09-17 15:49:34.879521 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-17 15:49:34.879532 | orchestrator | Wednesday 17 September 2025 15:49:25 +0000 (0:00:01.390) 0:06:29.470 *** 2025-09-17 15:49:34.879575 | orchestrator | ok: [testbed-manager] 2025-09-17 15:49:34.879586 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:49:34.879596 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:49:34.879607 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:49:34.879617 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:49:34.879628 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:49:34.879638 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:49:34.879649 | orchestrator | 2025-09-17 15:49:34.879659 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-17 15:49:34.879670 | orchestrator | Wednesday 17 September 2025 15:49:26 +0000 (0:00:01.328) 0:06:30.799 *** 2025-09-17 15:49:34.879681 | orchestrator | changed: [testbed-manager] 2025-09-17 15:49:34.879691 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:49:34.879702 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:49:34.879712 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:49:34.879723 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:49:34.879733 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:49:34.879744 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:49:34.879754 | orchestrator | 2025-09-17 15:49:34.879765 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-17 15:49:34.879776 | orchestrator | 
Wednesday 17 September 2025 15:49:27 +0000 (0:00:01.556) 0:06:32.356 *** 2025-09-17 15:49:34.879805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:49:34.879817 | orchestrator | 2025-09-17 15:49:34.879828 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-09-17 15:49:34.879839 | orchestrator | Wednesday 17 September 2025 15:49:28 +0000 (0:00:00.839) 0:06:33.195 *** 2025-09-17 15:49:34.879849 | orchestrator | ok: [testbed-manager] 2025-09-17 15:49:34.879860 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:49:34.879871 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:49:34.879881 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:49:34.879892 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:49:34.879902 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:49:34.879913 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:49:34.879923 | orchestrator | 2025-09-17 15:49:34.879934 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-17 15:49:34.879944 | orchestrator | Wednesday 17 September 2025 15:49:30 +0000 (0:00:01.390) 0:06:34.586 *** 2025-09-17 15:49:34.879955 | orchestrator | ok: [testbed-manager] 2025-09-17 15:49:34.879966 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:49:34.879977 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:49:34.879987 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:49:34.879998 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:49:34.880008 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:49:34.880018 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:49:34.880029 | orchestrator | 2025-09-17 15:49:34.880040 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 
2025-09-17 15:49:34.880058 | orchestrator | Wednesday 17 September 2025 15:49:31 +0000 (0:00:01.119) 0:06:35.706 *** 2025-09-17 15:49:34.880069 | orchestrator | ok: [testbed-manager] 2025-09-17 15:49:34.880079 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:49:34.880090 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:49:34.880101 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:49:34.880111 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:49:34.880121 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:49:34.880132 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:49:34.880142 | orchestrator | 2025-09-17 15:49:34.880153 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-17 15:49:34.880164 | orchestrator | Wednesday 17 September 2025 15:49:32 +0000 (0:00:01.307) 0:06:37.013 *** 2025-09-17 15:49:34.880175 | orchestrator | ok: [testbed-manager] 2025-09-17 15:49:34.880185 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:49:34.880195 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:49:34.880206 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:49:34.880216 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:49:34.880227 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:49:34.880237 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:49:34.880248 | orchestrator | 2025-09-17 15:49:34.880259 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-17 15:49:34.880269 | orchestrator | Wednesday 17 September 2025 15:49:33 +0000 (0:00:01.138) 0:06:38.152 *** 2025-09-17 15:49:34.880286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:49:34.880297 | orchestrator | 2025-09-17 15:49:34.880308 | orchestrator | TASK [osism.services.docker : Flush 
handlers] ********************************** 2025-09-17 15:49:34.880319 | orchestrator | Wednesday 17 September 2025 15:49:34 +0000 (0:00:00.889) 0:06:39.042 *** 2025-09-17 15:49:34.880329 | orchestrator | 2025-09-17 15:49:34.880340 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-17 15:49:34.880350 | orchestrator | Wednesday 17 September 2025 15:49:34 +0000 (0:00:00.038) 0:06:39.080 *** 2025-09-17 15:49:34.880361 | orchestrator | 2025-09-17 15:49:34.880372 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-17 15:49:34.880382 | orchestrator | Wednesday 17 September 2025 15:49:34 +0000 (0:00:00.044) 0:06:39.125 *** 2025-09-17 15:49:34.880393 | orchestrator | 2025-09-17 15:49:34.880403 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-17 15:49:34.880414 | orchestrator | Wednesday 17 September 2025 15:49:34 +0000 (0:00:00.038) 0:06:39.164 *** 2025-09-17 15:49:34.880425 | orchestrator | 2025-09-17 15:49:34.880443 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-17 15:50:02.203493 | orchestrator | Wednesday 17 September 2025 15:49:34 +0000 (0:00:00.038) 0:06:39.202 *** 2025-09-17 15:50:02.203660 | orchestrator | 2025-09-17 15:50:02.203678 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-17 15:50:02.203689 | orchestrator | Wednesday 17 September 2025 15:49:34 +0000 (0:00:00.045) 0:06:39.248 *** 2025-09-17 15:50:02.203700 | orchestrator | 2025-09-17 15:50:02.203712 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-17 15:50:02.203722 | orchestrator | Wednesday 17 September 2025 15:49:34 +0000 (0:00:00.037) 0:06:39.286 *** 2025-09-17 15:50:02.203733 | orchestrator | 2025-09-17 15:50:02.203744 | orchestrator | RUNNING HANDLER 
[osism.commons.repository : Force update of package cache] ***** 2025-09-17 15:50:02.203755 | orchestrator | Wednesday 17 September 2025 15:49:34 +0000 (0:00:00.037) 0:06:39.324 *** 2025-09-17 15:50:02.203766 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:02.203778 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:02.203789 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:02.203799 | orchestrator | 2025-09-17 15:50:02.203810 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-17 15:50:02.203864 | orchestrator | Wednesday 17 September 2025 15:49:36 +0000 (0:00:01.631) 0:06:40.955 *** 2025-09-17 15:50:02.203876 | orchestrator | changed: [testbed-manager] 2025-09-17 15:50:02.203887 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:02.203898 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:02.203909 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:02.203919 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:02.203930 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:02.203941 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:02.203951 | orchestrator | 2025-09-17 15:50:02.203962 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-17 15:50:02.203973 | orchestrator | Wednesday 17 September 2025 15:49:37 +0000 (0:00:01.321) 0:06:42.277 *** 2025-09-17 15:50:02.203984 | orchestrator | changed: [testbed-manager] 2025-09-17 15:50:02.203994 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:02.204005 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:02.204016 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:02.204026 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:02.204039 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:02.204053 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:02.204066 | orchestrator | 2025-09-17 
15:50:02.204078 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-17 15:50:02.204090 | orchestrator | Wednesday 17 September 2025 15:49:38 +0000 (0:00:01.128) 0:06:43.405 *** 2025-09-17 15:50:02.204103 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:50:02.204115 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:02.204127 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:02.204139 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:02.204152 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:02.204163 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:02.204176 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:02.204188 | orchestrator | 2025-09-17 15:50:02.204201 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-17 15:50:02.204213 | orchestrator | Wednesday 17 September 2025 15:49:41 +0000 (0:00:02.428) 0:06:45.834 *** 2025-09-17 15:50:02.204225 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:50:02.204237 | orchestrator | 2025-09-17 15:50:02.204249 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-17 15:50:02.204262 | orchestrator | Wednesday 17 September 2025 15:49:41 +0000 (0:00:00.108) 0:06:45.942 *** 2025-09-17 15:50:02.204274 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:02.204286 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:02.204298 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:02.204310 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:02.204322 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:02.204334 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:02.204345 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:02.204355 | orchestrator | 2025-09-17 15:50:02.204366 | orchestrator | TASK [osism.services.docker : Log into private registry and 
force re-authorization] *** 2025-09-17 15:50:02.204378 | orchestrator | Wednesday 17 September 2025 15:49:42 +0000 (0:00:01.001) 0:06:46.943 *** 2025-09-17 15:50:02.204388 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:50:02.204399 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:50:02.204409 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:50:02.204420 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:50:02.204430 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:50:02.204441 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:50:02.204451 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:50:02.204462 | orchestrator | 2025-09-17 15:50:02.204473 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-17 15:50:02.204483 | orchestrator | Wednesday 17 September 2025 15:49:43 +0000 (0:00:00.686) 0:06:47.629 *** 2025-09-17 15:50:02.204510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:50:02.204534 | orchestrator | 2025-09-17 15:50:02.204569 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-17 15:50:02.204579 | orchestrator | Wednesday 17 September 2025 15:49:44 +0000 (0:00:00.893) 0:06:48.523 *** 2025-09-17 15:50:02.204590 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:02.204600 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:02.204611 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:02.204622 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:02.204632 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:02.204643 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:02.204653 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:02.204664 | orchestrator | 2025-09-17 
15:50:02.204674 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-17 15:50:02.204685 | orchestrator | Wednesday 17 September 2025 15:49:44 +0000 (0:00:00.839) 0:06:49.363 *** 2025-09-17 15:50:02.204696 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-17 15:50:02.204707 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-17 15:50:02.204734 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-17 15:50:02.204746 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-17 15:50:02.204757 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-17 15:50:02.204767 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-17 15:50:02.204778 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-17 15:50:02.204789 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-17 15:50:02.204800 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-17 15:50:02.204811 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-17 15:50:02.204821 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-17 15:50:02.204832 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-17 15:50:02.204842 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-17 15:50:02.204853 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-17 15:50:02.204863 | orchestrator | 2025-09-17 15:50:02.204874 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-17 15:50:02.204884 | orchestrator | Wednesday 17 September 2025 15:49:47 +0000 (0:00:02.861) 0:06:52.224 *** 2025-09-17 15:50:02.204895 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:50:02.204906 | orchestrator | skipping: [testbed-node-0] 
2025-09-17 15:50:02.204917 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:50:02.204927 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:50:02.204938 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:50:02.204948 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:50:02.204959 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:50:02.204969 | orchestrator | 2025-09-17 15:50:02.204980 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-17 15:50:02.204990 | orchestrator | Wednesday 17 September 2025 15:49:48 +0000 (0:00:00.534) 0:06:52.759 *** 2025-09-17 15:50:02.205002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:50:02.205014 | orchestrator | 2025-09-17 15:50:02.205026 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-17 15:50:02.205036 | orchestrator | Wednesday 17 September 2025 15:49:49 +0000 (0:00:00.803) 0:06:53.563 *** 2025-09-17 15:50:02.205047 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:02.205057 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:02.205068 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:02.205086 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:02.205097 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:02.205107 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:02.205118 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:02.205128 | orchestrator | 2025-09-17 15:50:02.205139 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-17 15:50:02.205150 | orchestrator | Wednesday 17 September 2025 15:49:50 +0000 (0:00:01.068) 0:06:54.631 *** 2025-09-17 15:50:02.205162 | 
orchestrator | ok: [testbed-manager] 2025-09-17 15:50:02.205179 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:02.205197 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:02.205215 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:02.205234 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:02.205252 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:02.205269 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:02.205287 | orchestrator | 2025-09-17 15:50:02.205304 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-17 15:50:02.205322 | orchestrator | Wednesday 17 September 2025 15:49:51 +0000 (0:00:00.850) 0:06:55.482 *** 2025-09-17 15:50:02.205342 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:50:02.205361 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:50:02.205379 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:50:02.205399 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:50:02.205417 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:50:02.205435 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:50:02.205450 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:50:02.205460 | orchestrator | 2025-09-17 15:50:02.205471 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-17 15:50:02.205482 | orchestrator | Wednesday 17 September 2025 15:49:51 +0000 (0:00:00.489) 0:06:55.971 *** 2025-09-17 15:50:02.205492 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:02.205503 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:02.205513 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:02.205524 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:02.205563 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:02.205584 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:02.205603 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:02.205621 | orchestrator | 
2025-09-17 15:50:02.205644 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-17 15:50:02.205656 | orchestrator | Wednesday 17 September 2025 15:49:53 +0000 (0:00:01.583) 0:06:57.555 *** 2025-09-17 15:50:02.205667 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:50:02.205677 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:50:02.205688 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:50:02.205698 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:50:02.205709 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:50:02.205719 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:50:02.205730 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:50:02.205740 | orchestrator | 2025-09-17 15:50:02.205751 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-17 15:50:02.205762 | orchestrator | Wednesday 17 September 2025 15:49:53 +0000 (0:00:00.532) 0:06:58.087 *** 2025-09-17 15:50:02.205772 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:02.205783 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:02.205793 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:02.205804 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:02.205814 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:02.205824 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:02.205835 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:02.205846 | orchestrator | 2025-09-17 15:50:02.205865 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-17 15:50:34.328671 | orchestrator | Wednesday 17 September 2025 15:50:02 +0000 (0:00:08.562) 0:07:06.650 *** 2025-09-17 15:50:34.328791 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:34.328810 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:34.328847 | orchestrator | changed: 
[testbed-node-1] 2025-09-17 15:50:34.328859 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:34.328870 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:34.328880 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:34.328891 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:34.328902 | orchestrator | 2025-09-17 15:50:34.328914 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-09-17 15:50:34.328924 | orchestrator | Wednesday 17 September 2025 15:50:03 +0000 (0:00:01.489) 0:07:08.139 *** 2025-09-17 15:50:34.328935 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:34.328946 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:34.328956 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:34.328967 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:34.328978 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:34.328988 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:34.328999 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:34.329009 | orchestrator | 2025-09-17 15:50:34.329020 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-17 15:50:34.329031 | orchestrator | Wednesday 17 September 2025 15:50:05 +0000 (0:00:01.760) 0:07:09.899 *** 2025-09-17 15:50:34.329041 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:34.329052 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:34.329062 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:34.329073 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:34.329083 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:34.329094 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:34.329105 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:34.329115 | orchestrator | 2025-09-17 15:50:34.329126 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] 
********************* 2025-09-17 15:50:34.329136 | orchestrator | Wednesday 17 September 2025 15:50:06 +0000 (0:00:01.502) 0:07:11.401 *** 2025-09-17 15:50:34.329147 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:34.329157 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:34.329168 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:34.329182 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:34.329194 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:34.329206 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:34.329217 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:34.329229 | orchestrator | 2025-09-17 15:50:34.329241 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-17 15:50:34.329253 | orchestrator | Wednesday 17 September 2025 15:50:07 +0000 (0:00:00.962) 0:07:12.364 *** 2025-09-17 15:50:34.329266 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:50:34.329278 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:50:34.329290 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:50:34.329302 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:50:34.329314 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:50:34.329326 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:50:34.329338 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:50:34.329349 | orchestrator | 2025-09-17 15:50:34.329360 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-17 15:50:34.329371 | orchestrator | Wednesday 17 September 2025 15:50:08 +0000 (0:00:00.660) 0:07:13.024 *** 2025-09-17 15:50:34.329382 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:50:34.329392 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:50:34.329403 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:50:34.329413 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:50:34.329424 | orchestrator | 
skipping: [testbed-node-3] 2025-09-17 15:50:34.329434 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:50:34.329445 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:50:34.329455 | orchestrator | 2025-09-17 15:50:34.329466 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-17 15:50:34.329477 | orchestrator | Wednesday 17 September 2025 15:50:08 +0000 (0:00:00.415) 0:07:13.440 *** 2025-09-17 15:50:34.329495 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:34.329506 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:34.329517 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:34.329527 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:34.329561 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:34.329573 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:34.329583 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:34.329594 | orchestrator | 2025-09-17 15:50:34.329605 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-17 15:50:34.329615 | orchestrator | Wednesday 17 September 2025 15:50:09 +0000 (0:00:00.566) 0:07:14.006 *** 2025-09-17 15:50:34.329626 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:34.329636 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:34.329647 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:34.329657 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:34.329668 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:34.329678 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:34.329689 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:34.329699 | orchestrator | 2025-09-17 15:50:34.329725 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-17 15:50:34.329736 | orchestrator | Wednesday 17 September 2025 15:50:09 +0000 (0:00:00.441) 0:07:14.448 *** 2025-09-17 15:50:34.329747 | orchestrator | ok: 
[testbed-manager] 2025-09-17 15:50:34.329757 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:34.329767 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:34.329778 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:34.329788 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:34.329798 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:34.329809 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:34.329819 | orchestrator | 2025-09-17 15:50:34.329830 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-17 15:50:34.329841 | orchestrator | Wednesday 17 September 2025 15:50:10 +0000 (0:00:00.462) 0:07:14.911 *** 2025-09-17 15:50:34.329852 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:34.329862 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:34.329873 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:34.329883 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:34.329894 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:34.329904 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:34.329915 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:34.329925 | orchestrator | 2025-09-17 15:50:34.329936 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-17 15:50:34.329964 | orchestrator | Wednesday 17 September 2025 15:50:16 +0000 (0:00:05.771) 0:07:20.682 *** 2025-09-17 15:50:34.329976 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:50:34.329987 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:50:34.329997 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:50:34.330008 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:50:34.330076 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:50:34.330087 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:50:34.330098 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:50:34.330108 | orchestrator | 2025-09-17 
15:50:34.330119 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-17 15:50:34.330130 | orchestrator | Wednesday 17 September 2025 15:50:16 +0000 (0:00:00.536) 0:07:21.218 *** 2025-09-17 15:50:34.330143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:50:34.330157 | orchestrator | 2025-09-17 15:50:34.330168 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-17 15:50:34.330179 | orchestrator | Wednesday 17 September 2025 15:50:17 +0000 (0:00:00.964) 0:07:22.182 *** 2025-09-17 15:50:34.330190 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:34.330200 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:34.330217 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:34.330228 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:34.330238 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:34.330249 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:34.330260 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:34.330270 | orchestrator | 2025-09-17 15:50:34.330281 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-17 15:50:34.330292 | orchestrator | Wednesday 17 September 2025 15:50:19 +0000 (0:00:02.244) 0:07:24.427 *** 2025-09-17 15:50:34.330303 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:34.330313 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:34.330324 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:34.330334 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:34.330345 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:34.330355 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:34.330366 | orchestrator | ok: [testbed-node-5] 
2025-09-17 15:50:34.330376 | orchestrator | 2025-09-17 15:50:34.330387 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-17 15:50:34.330398 | orchestrator | Wednesday 17 September 2025 15:50:21 +0000 (0:00:01.173) 0:07:25.601 *** 2025-09-17 15:50:34.330409 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:34.330420 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:34.330430 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:34.330441 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:34.330451 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:34.330462 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:34.330472 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:34.330483 | orchestrator | 2025-09-17 15:50:34.330494 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-17 15:50:34.330504 | orchestrator | Wednesday 17 September 2025 15:50:22 +0000 (0:00:01.016) 0:07:26.618 *** 2025-09-17 15:50:34.330515 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-17 15:50:34.330528 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-17 15:50:34.330555 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-17 15:50:34.330567 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-17 15:50:34.330577 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-17 15:50:34.330588 | orchestrator | 
changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-17 15:50:34.330598 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-17 15:50:34.330609 | orchestrator | 2025-09-17 15:50:34.330619 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-09-17 15:50:34.330631 | orchestrator | Wednesday 17 September 2025 15:50:23 +0000 (0:00:01.788) 0:07:28.407 *** 2025-09-17 15:50:34.330642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:50:34.330653 | orchestrator | 2025-09-17 15:50:34.330664 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-17 15:50:34.330674 | orchestrator | Wednesday 17 September 2025 15:50:24 +0000 (0:00:00.795) 0:07:29.202 *** 2025-09-17 15:50:34.330685 | orchestrator | changed: [testbed-manager] 2025-09-17 15:50:34.330696 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:34.330714 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:34.330724 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:34.330735 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:34.330746 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:34.330756 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:34.330766 | orchestrator | 2025-09-17 15:50:34.330777 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-17 15:50:34.330796 | orchestrator | Wednesday 17 September 2025 15:50:34 +0000 (0:00:09.576) 0:07:38.779 *** 2025-09-17 15:50:50.210463 | orchestrator | 
ok: [testbed-manager] 2025-09-17 15:50:50.210632 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:50.210651 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:50.210662 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:50.210673 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:50.210684 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:50.210694 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:50.210705 | orchestrator | 2025-09-17 15:50:50.210719 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-17 15:50:50.210731 | orchestrator | Wednesday 17 September 2025 15:50:36 +0000 (0:00:01.693) 0:07:40.472 *** 2025-09-17 15:50:50.210742 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:50.210753 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:50.210764 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:50.210774 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:50.210785 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:50.210795 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:50.210806 | orchestrator | 2025-09-17 15:50:50.210817 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-17 15:50:50.210828 | orchestrator | Wednesday 17 September 2025 15:50:37 +0000 (0:00:01.395) 0:07:41.867 *** 2025-09-17 15:50:50.210839 | orchestrator | changed: [testbed-manager] 2025-09-17 15:50:50.210850 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:50.210861 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:50.210871 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:50.210882 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:50.210893 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:50.210903 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:50.210914 | orchestrator | 2025-09-17 15:50:50.210925 | orchestrator | PLAY [Apply bootstrap role part 2] 
********************************************* 2025-09-17 15:50:50.210935 | orchestrator | 2025-09-17 15:50:50.210946 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-17 15:50:50.210957 | orchestrator | Wednesday 17 September 2025 15:50:38 +0000 (0:00:01.423) 0:07:43.290 *** 2025-09-17 15:50:50.210967 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:50:50.210978 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:50:50.210992 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:50:50.211004 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:50:50.211015 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:50:50.211027 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:50:50.211039 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:50:50.211052 | orchestrator | 2025-09-17 15:50:50.211064 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-17 15:50:50.211076 | orchestrator | 2025-09-17 15:50:50.211089 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-17 15:50:50.211101 | orchestrator | Wednesday 17 September 2025 15:50:39 +0000 (0:00:00.490) 0:07:43.781 *** 2025-09-17 15:50:50.211112 | orchestrator | changed: [testbed-manager] 2025-09-17 15:50:50.211124 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:50.211136 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:50.211147 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:50.211160 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:50.211172 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:50.211185 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:50.211196 | orchestrator | 2025-09-17 15:50:50.211229 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-17 15:50:50.211242 | orchestrator | Wednesday 
17 September 2025 15:50:40 +0000 (0:00:01.279) 0:07:45.061 *** 2025-09-17 15:50:50.211254 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:50.211266 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:50.211277 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:50.211289 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:50.211301 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:50.211314 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:50.211373 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:50.211386 | orchestrator | 2025-09-17 15:50:50.211397 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-17 15:50:50.211408 | orchestrator | Wednesday 17 September 2025 15:50:42 +0000 (0:00:01.438) 0:07:46.499 *** 2025-09-17 15:50:50.211418 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:50:50.211429 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:50:50.211439 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:50:50.211450 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:50:50.211460 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:50:50.211470 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:50:50.211481 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:50:50.211491 | orchestrator | 2025-09-17 15:50:50.211502 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-17 15:50:50.211512 | orchestrator | Wednesday 17 September 2025 15:50:42 +0000 (0:00:00.965) 0:07:47.464 *** 2025-09-17 15:50:50.211523 | orchestrator | changed: [testbed-manager] 2025-09-17 15:50:50.211533 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:50.211564 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:50.211575 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:50.211586 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:50.211602 | orchestrator | changed: 
[testbed-node-4] 2025-09-17 15:50:50.211613 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:50.211623 | orchestrator | 2025-09-17 15:50:50.211634 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-17 15:50:50.211645 | orchestrator | 2025-09-17 15:50:50.211655 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-17 15:50:50.211666 | orchestrator | Wednesday 17 September 2025 15:50:44 +0000 (0:00:01.231) 0:07:48.696 *** 2025-09-17 15:50:50.211677 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:50:50.211690 | orchestrator | 2025-09-17 15:50:50.211701 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-17 15:50:50.211711 | orchestrator | Wednesday 17 September 2025 15:50:45 +0000 (0:00:01.002) 0:07:49.699 *** 2025-09-17 15:50:50.211722 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:50.211732 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:50.211743 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:50.211753 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:50.211764 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:50.211774 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:50.211785 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:50.211795 | orchestrator | 2025-09-17 15:50:50.211824 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-17 15:50:50.211836 | orchestrator | Wednesday 17 September 2025 15:50:46 +0000 (0:00:00.867) 0:07:50.566 *** 2025-09-17 15:50:50.211847 | orchestrator | changed: [testbed-manager] 2025-09-17 15:50:50.211857 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:50.211868 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:50.211878 | 
orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:50.211889 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:50.211899 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:50.211910 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:50.211920 | orchestrator | 2025-09-17 15:50:50.211941 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-17 15:50:50.211952 | orchestrator | Wednesday 17 September 2025 15:50:47 +0000 (0:00:01.172) 0:07:51.739 *** 2025-09-17 15:50:50.211963 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:50:50.211974 | orchestrator | 2025-09-17 15:50:50.211984 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-17 15:50:50.211995 | orchestrator | Wednesday 17 September 2025 15:50:48 +0000 (0:00:01.056) 0:07:52.795 *** 2025-09-17 15:50:50.212006 | orchestrator | ok: [testbed-manager] 2025-09-17 15:50:50.212016 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:50:50.212027 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:50:50.212037 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:50:50.212048 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:50:50.212059 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:50:50.212069 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:50:50.212080 | orchestrator | 2025-09-17 15:50:50.212091 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-17 15:50:50.212101 | orchestrator | Wednesday 17 September 2025 15:50:49 +0000 (0:00:00.820) 0:07:53.616 *** 2025-09-17 15:50:50.212112 | orchestrator | changed: [testbed-manager] 2025-09-17 15:50:50.212123 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:50:50.212133 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:50:50.212144 | 
orchestrator | changed: [testbed-node-2] 2025-09-17 15:50:50.212154 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:50:50.212165 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:50:50.212175 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:50:50.212186 | orchestrator | 2025-09-17 15:50:50.212197 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 15:50:50.212209 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-09-17 15:50:50.212221 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-17 15:50:50.212232 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-17 15:50:50.212242 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-17 15:50:50.212253 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-17 15:50:50.212263 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-17 15:50:50.212274 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-17 15:50:50.212284 | orchestrator | 2025-09-17 15:50:50.212295 | orchestrator | 2025-09-17 15:50:50.212306 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 15:50:50.212317 | orchestrator | Wednesday 17 September 2025 15:50:50 +0000 (0:00:01.033) 0:07:54.649 *** 2025-09-17 15:50:50.212328 | orchestrator | =============================================================================== 2025-09-17 15:50:50.212339 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.79s 2025-09-17 15:50:50.212349 | orchestrator | 
osism.commons.packages : Download required packages -------------------- 37.06s 2025-09-17 15:50:50.212360 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.99s 2025-09-17 15:50:50.212371 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.03s 2025-09-17 15:50:50.212389 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.87s 2025-09-17 15:50:50.212400 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.48s 2025-09-17 15:50:50.212411 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.85s 2025-09-17 15:50:50.212421 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.97s 2025-09-17 15:50:50.212432 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.58s 2025-09-17 15:50:50.212443 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.33s 2025-09-17 15:50:50.212453 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.56s 2025-09-17 15:50:50.212464 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.50s 2025-09-17 15:50:50.212474 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.00s 2025-09-17 15:50:50.212485 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.86s 2025-09-17 15:50:50.212503 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.68s 2025-09-17 15:50:50.906831 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.48s 2025-09-17 15:50:50.906936 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.83s 2025-09-17 15:50:50.906951 | orchestrator | 
osism.commons.cleanup : Remove dependencies that are no longer required --- 5.82s 2025-09-17 15:50:50.906963 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.77s 2025-09-17 15:50:50.906974 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.67s 2025-09-17 15:50:51.186874 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-17 15:50:51.186962 | orchestrator | + osism apply network 2025-09-17 15:51:03.520706 | orchestrator | 2025-09-17 15:51:03 | INFO  | Task ff0eb234-cd4f-430e-abe1-36f74fb836f6 (network) was prepared for execution. 2025-09-17 15:51:03.520814 | orchestrator | 2025-09-17 15:51:03 | INFO  | It takes a moment until task ff0eb234-cd4f-430e-abe1-36f74fb836f6 (network) has been started and output is visible here. 2025-09-17 15:51:32.289042 | orchestrator | 2025-09-17 15:51:32.289162 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-17 15:51:32.289180 | orchestrator | 2025-09-17 15:51:32.289192 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-17 15:51:32.289204 | orchestrator | Wednesday 17 September 2025 15:51:07 +0000 (0:00:00.259) 0:00:00.259 *** 2025-09-17 15:51:32.289215 | orchestrator | ok: [testbed-manager] 2025-09-17 15:51:32.289228 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:51:32.289239 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:51:32.289250 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:51:32.289260 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:51:32.289271 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:51:32.289282 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:51:32.289293 | orchestrator | 2025-09-17 15:51:32.289304 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-17 15:51:32.289314 | orchestrator | Wednesday 17 September 2025 15:51:08 +0000 
(0:00:00.720) 0:00:00.980 *** 2025-09-17 15:51:32.289326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:51:32.289340 | orchestrator | 2025-09-17 15:51:32.289351 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-17 15:51:32.289362 | orchestrator | Wednesday 17 September 2025 15:51:09 +0000 (0:00:01.233) 0:00:02.214 *** 2025-09-17 15:51:32.289373 | orchestrator | ok: [testbed-manager] 2025-09-17 15:51:32.289384 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:51:32.289395 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:51:32.289406 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:51:32.289416 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:51:32.289453 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:51:32.289465 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:51:32.289475 | orchestrator | 2025-09-17 15:51:32.289486 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-17 15:51:32.289497 | orchestrator | Wednesday 17 September 2025 15:51:11 +0000 (0:00:02.085) 0:00:04.299 *** 2025-09-17 15:51:32.289507 | orchestrator | ok: [testbed-manager] 2025-09-17 15:51:32.289518 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:51:32.289528 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:51:32.289566 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:51:32.289579 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:51:32.289589 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:51:32.289600 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:51:32.289610 | orchestrator | 2025-09-17 15:51:32.289621 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-17 15:51:32.289632 | 
orchestrator | Wednesday 17 September 2025 15:51:13 +0000 (0:00:01.800) 0:00:06.100 *** 2025-09-17 15:51:32.289643 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-17 15:51:32.289655 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-17 15:51:32.289665 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-17 15:51:32.289676 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-17 15:51:32.289687 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-17 15:51:32.289698 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-17 15:51:32.289708 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-17 15:51:32.289719 | orchestrator | 2025-09-17 15:51:32.289730 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-09-17 15:51:32.289757 | orchestrator | Wednesday 17 September 2025 15:51:14 +0000 (0:00:01.025) 0:00:07.126 *** 2025-09-17 15:51:32.289768 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-17 15:51:32.289779 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 15:51:32.289790 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-17 15:51:32.289800 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-17 15:51:32.289811 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-17 15:51:32.289822 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-17 15:51:32.289832 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-17 15:51:32.289843 | orchestrator | 2025-09-17 15:51:32.289854 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-17 15:51:32.289865 | orchestrator | Wednesday 17 September 2025 15:51:18 +0000 (0:00:03.352) 0:00:10.478 *** 2025-09-17 15:51:32.289875 | orchestrator | changed: [testbed-manager] 2025-09-17 15:51:32.289886 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:51:32.289897 | 
orchestrator | changed: [testbed-node-1] 2025-09-17 15:51:32.289907 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:51:32.289918 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:51:32.289928 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:51:32.289939 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:51:32.289949 | orchestrator | 2025-09-17 15:51:32.289960 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-17 15:51:32.289971 | orchestrator | Wednesday 17 September 2025 15:51:19 +0000 (0:00:01.584) 0:00:12.063 *** 2025-09-17 15:51:32.289982 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-17 15:51:32.289992 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 15:51:32.290003 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-17 15:51:32.290014 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-17 15:51:32.290083 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-17 15:51:32.290094 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-17 15:51:32.290104 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-17 15:51:32.290115 | orchestrator | 2025-09-17 15:51:32.290126 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-17 15:51:32.290137 | orchestrator | Wednesday 17 September 2025 15:51:21 +0000 (0:00:02.071) 0:00:14.135 *** 2025-09-17 15:51:32.290157 | orchestrator | ok: [testbed-manager] 2025-09-17 15:51:32.290168 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:51:32.290179 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:51:32.290189 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:51:32.290200 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:51:32.290210 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:51:32.290221 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:51:32.290232 | orchestrator | 2025-09-17 15:51:32.290243 | orchestrator | TASK 
[osism.commons.network : Copy interfaces file] **************************** 2025-09-17 15:51:32.290271 | orchestrator | Wednesday 17 September 2025 15:51:22 +0000 (0:00:01.121) 0:00:15.256 *** 2025-09-17 15:51:32.290283 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:51:32.290294 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:51:32.290304 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:51:32.290314 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:51:32.290325 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:51:32.290335 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:51:32.290345 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:51:32.290356 | orchestrator | 2025-09-17 15:51:32.290367 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-09-17 15:51:32.290378 | orchestrator | Wednesday 17 September 2025 15:51:23 +0000 (0:00:00.653) 0:00:15.909 *** 2025-09-17 15:51:32.290388 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:51:32.290399 | orchestrator | ok: [testbed-manager] 2025-09-17 15:51:32.290410 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:51:32.290420 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:51:32.290431 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:51:32.290441 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:51:32.290452 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:51:32.290462 | orchestrator | 2025-09-17 15:51:32.290473 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-17 15:51:32.290484 | orchestrator | Wednesday 17 September 2025 15:51:25 +0000 (0:00:02.252) 0:00:18.162 *** 2025-09-17 15:51:32.290494 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:51:32.290505 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:51:32.290516 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:51:32.290526 | orchestrator | skipping: 
[testbed-node-3] 2025-09-17 15:51:32.290554 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:51:32.290566 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:51:32.290577 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-17 15:51:32.290589 | orchestrator | 2025-09-17 15:51:32.290600 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-17 15:51:32.290611 | orchestrator | Wednesday 17 September 2025 15:51:26 +0000 (0:00:00.888) 0:00:19.051 *** 2025-09-17 15:51:32.290622 | orchestrator | ok: [testbed-manager] 2025-09-17 15:51:32.290632 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:51:32.290643 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:51:32.290653 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:51:32.290664 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:51:32.290674 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:51:32.290685 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:51:32.290695 | orchestrator | 2025-09-17 15:51:32.290706 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-17 15:51:32.290717 | orchestrator | Wednesday 17 September 2025 15:51:28 +0000 (0:00:01.593) 0:00:20.644 *** 2025-09-17 15:51:32.290728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:51:32.290741 | orchestrator | 2025-09-17 15:51:32.290752 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-17 15:51:32.290769 | orchestrator | Wednesday 17 September 2025 15:51:29 +0000 (0:00:01.187) 0:00:21.832 *** 2025-09-17 15:51:32.290780 | orchestrator | ok: 
[testbed-manager]
2025-09-17 15:51:32.290791 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:51:32.290801 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:51:32.290812 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:51:32.290828 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:51:32.290839 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:51:32.290849 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:51:32.290860 | orchestrator |
2025-09-17 15:51:32.290871 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-09-17 15:51:32.290881 | orchestrator | Wednesday 17 September 2025 15:51:30 +0000 (0:00:00.954) 0:00:22.786 ***
2025-09-17 15:51:32.290892 | orchestrator | ok: [testbed-manager]
2025-09-17 15:51:32.290902 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:51:32.290913 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:51:32.290923 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:51:32.290934 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:51:32.290944 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:51:32.290955 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:51:32.290965 | orchestrator |
2025-09-17 15:51:32.290976 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-17 15:51:32.290986 | orchestrator | Wednesday 17 September 2025 15:51:31 +0000 (0:00:00.804) 0:00:23.590 ***
2025-09-17 15:51:32.290997 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 15:51:32.291008 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 15:51:32.291019 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 15:51:32.291029 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 15:51:32.291040 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17
15:51:32.291050 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 15:51:32.291061 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 15:51:32.291071 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 15:51:32.291082 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 15:51:32.291093 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-09-17 15:51:32.291103 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 15:51:32.291114 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 15:51:32.291125 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 15:51:32.291135 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-17 15:51:32.291146 | orchestrator |
2025-09-17 15:51:32.291164 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-17 15:51:48.209809 | orchestrator | Wednesday 17 September 2025 15:51:32 +0000 (0:00:01.157) 0:00:24.748 ***
2025-09-17 15:51:48.209907 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:51:48.209922 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:51:48.209934 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:51:48.209945 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:51:48.209956 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:51:48.209966 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:51:48.209977 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:51:48.209988 | orchestrator |
2025-09-17 15:51:48.210000 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-17 15:51:48.210011 | orchestrator | Wednesday 17
September 2025 15:51:32 +0000 (0:00:00.599) 0:00:25.347 ***
2025-09-17 15:51:48.210081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:51:48.210115 | orchestrator |
2025-09-17 15:51:48.210127 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-17 15:51:48.210137 | orchestrator | Wednesday 17 September 2025 15:51:37 +0000 (0:00:04.297) 0:00:29.644 ***
2025-09-17 15:51:48.210150 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210172 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:48.210184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12',
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210225 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210237 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210248 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:48.210270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:48.210280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:48.210307 | orchestrator | changed: [testbed-node-4] =>
(item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:48.210319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:48.210338 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:48.210350 | orchestrator |
2025-09-17 15:51:48.210361 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-09-17 15:51:48.210374 | orchestrator | Wednesday 17 September 2025 15:51:42 +0000 (0:00:05.313) 0:00:34.958 ***
2025-09-17 15:51:48.210387 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210400 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:48.210413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'],
'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210463 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210476 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-17 15:51:48.210489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:48.210502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11',
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:48.210515 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:48.210528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:48.210574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:54.329694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-17 15:51:54.329820 | orchestrator |
2025-09-17 15:51:54.329843 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-09-17 15:51:54.329865 | orchestrator | Wednesday 17 September 2025 15:51:48 +0000 (0:00:05.689) 0:00:40.648 ***
2025-09-17 15:51:54.329883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 15:51:54.329901 | orchestrator |
2025-09-17
15:51:54.329918 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-17 15:51:54.329938 | orchestrator | Wednesday 17 September 2025 15:51:49 +0000 (0:00:01.409) 0:00:42.057 ***
2025-09-17 15:51:54.329958 | orchestrator | ok: [testbed-manager]
2025-09-17 15:51:54.329977 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:51:54.330085 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:51:54.330105 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:51:54.330124 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:51:54.330145 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:51:54.330163 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:51:54.330181 | orchestrator |
2025-09-17 15:51:54.330201 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-17 15:51:54.330221 | orchestrator | Wednesday 17 September 2025 15:51:50 +0000 (0:00:01.115) 0:00:43.172 ***
2025-09-17 15:51:54.330240 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-17 15:51:54.330260 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-17 15:51:54.330279 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-17 15:51:54.330300 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-17 15:51:54.330321 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-17 15:51:54.330340 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-17 15:51:54.330361 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-17 15:51:54.330382 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-17 15:51:54.330402 | orchestrator | skipping:
[testbed-manager]
2025-09-17 15:51:54.330423 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-17 15:51:54.330443 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-17 15:51:54.330472 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-17 15:51:54.330492 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-17 15:51:54.330511 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:51:54.330530 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-17 15:51:54.330578 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-17 15:51:54.330597 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-17 15:51:54.330648 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-17 15:51:54.330668 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:51:54.330687 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-17 15:51:54.330706 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-17 15:51:54.330726 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-17 15:51:54.330745 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-17 15:51:54.330762 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:51:54.330781 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-17 15:51:54.330800 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-17 15:51:54.330820 | orchestrator | skipping: [testbed-node-4] =>
(item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-17 15:51:54.330839 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-17 15:51:54.330858 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:51:54.330875 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:51:54.330890 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-17 15:51:54.330905 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-17 15:51:54.330925 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-17 15:51:54.330944 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-17 15:51:54.330962 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:51:54.330981 | orchestrator |
2025-09-17 15:51:54.331000 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-09-17 15:51:54.331043 | orchestrator | Wednesday 17 September 2025 15:51:52 +0000 (0:00:01.974) 0:00:45.147 ***
2025-09-17 15:51:54.331063 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:51:54.331083 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:51:54.331103 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:51:54.331123 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:51:54.331142 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:51:54.331161 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:51:54.331181 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:51:54.331200 | orchestrator |
2025-09-17 15:51:54.331219 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-09-17 15:51:54.331239 | orchestrator | Wednesday 17 September 2025 15:51:53 +0000 (0:00:00.619) 0:00:45.766 ***
2025-09-17 15:51:54.331259 | orchestrator | skipping:
[testbed-manager]
2025-09-17 15:51:54.331278 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:51:54.331297 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:51:54.331317 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:51:54.331335 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:51:54.331352 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:51:54.331371 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:51:54.331389 | orchestrator |
2025-09-17 15:51:54.331406 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 15:51:54.331426 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-17 15:51:54.331447 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 15:51:54.331466 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 15:51:54.331486 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 15:51:54.331530 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 15:51:54.331628 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 15:51:54.331649 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 15:51:54.331668 | orchestrator |
2025-09-17 15:51:54.331687 | orchestrator |
2025-09-17 15:51:54.331706 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 15:51:54.331727 | orchestrator | Wednesday 17 September 2025 15:51:54 +0000 (0:00:00.711) 0:00:46.477 ***
2025-09-17 15:51:54.331757 | orchestrator | ===============================================================================
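For reference, the "Create systemd networkd netdev files" and "Create systemd networkd network files" tasks above drop one `.netdev` and one `.network` file per VXLAN under `/etc/systemd/network/` (the later cleanup task lists names like `30-vxlan0.netdev`). The following is a minimal sketch of what such a pair could look like for `vxlan0` on `testbed-manager`, built only from the loop item values in the log (VNI 42, local IP 192.168.16.5, MTU 1350, address 192.168.112.5/20, six unicast peers). It is an illustration following the systemd.netdev(5)/systemd.network(5) key names, not the actual Jinja templates of the osism.commons.network role, and it writes to a scratch directory rather than the real system path:

```shell
# Illustrative only: reconstructs a plausible 30-vxlan0.netdev/.network pair
# from the loop items logged above. The role's real templates may differ.
outdir=$(mktemp -d)

cat > "$outdir/30-vxlan0.netdev" <<'EOF'
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
EOF

# The .network file assigns the address and, as one plausible way to reach
# the unicast peers, a static all-zero FDB entry per destination in 'dests'.
{
  printf '[Match]\nName=vxlan0\n\n[Network]\nAddress=192.168.112.5/20\n'
  for dest in 192.168.16.10 192.168.16.11 192.168.16.12 \
              192.168.16.13 192.168.16.14 192.168.16.15; do
    printf '\n[BridgeFDB]\nMACAddress=00:00:00:00:00:00\nDestination=%s\n' "$dest"
  done
} > "$outdir/30-vxlan0.network"

grep -c '^\[BridgeFDB\]' "$outdir/30-vxlan0.network"   # prints 6, one per peer
```

On the compute nodes the `addresses` list for `vxlan0` is empty in the log, so their `.network` files would carry only the match and FDB sections.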
2025-09-17 15:51:54.331777 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.69s
2025-09-17 15:51:54.331798 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.31s
2025-09-17 15:51:54.331817 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.30s
2025-09-17 15:51:54.331838 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.35s
2025-09-17 15:51:54.331857 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.25s
2025-09-17 15:51:54.331873 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.09s
2025-09-17 15:51:54.331888 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.07s
2025-09-17 15:51:54.331903 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.97s
2025-09-17 15:51:54.331922 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.80s
2025-09-17 15:51:54.331939 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.59s
2025-09-17 15:51:54.331958 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.58s
2025-09-17 15:51:54.331974 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.41s
2025-09-17 15:51:54.331992 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.23s
2025-09-17 15:51:54.332012 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.19s
2025-09-17 15:51:54.332032 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.16s
2025-09-17 15:51:54.332052 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.12s
2025-09-17
15:51:54.332069 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.12s
2025-09-17 15:51:54.332085 | orchestrator | osism.commons.network : Create required directories --------------------- 1.03s
2025-09-17 15:51:54.332101 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.95s
2025-09-17 15:51:54.332116 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.89s
2025-09-17 15:51:54.592577 | orchestrator | + osism apply wireguard
2025-09-17 15:52:06.601482 | orchestrator | 2025-09-17 15:52:06 | INFO  | Task ef9a3a89-6cc1-4287-be2e-b77bff29f503 (wireguard) was prepared for execution.
2025-09-17 15:52:06.601641 | orchestrator | 2025-09-17 15:52:06 | INFO  | It takes a moment until task ef9a3a89-6cc1-4287-be2e-b77bff29f503 (wireguard) has been started and output is visible here.
2025-09-17 15:52:25.482688 | orchestrator |
2025-09-17 15:52:25.482797 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-09-17 15:52:25.482812 | orchestrator |
2025-09-17 15:52:25.482822 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-09-17 15:52:25.482833 | orchestrator | Wednesday 17 September 2025 15:52:10 +0000 (0:00:00.223) 0:00:00.223 ***
2025-09-17 15:52:25.482843 | orchestrator | ok: [testbed-manager]
2025-09-17 15:52:25.482877 | orchestrator |
2025-09-17 15:52:25.482888 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-09-17 15:52:25.482897 | orchestrator | Wednesday 17 September 2025 15:52:12 +0000 (0:00:01.442) 0:00:01.666 ***
2025-09-17 15:52:25.482907 | orchestrator | changed: [testbed-manager]
2025-09-17 15:52:25.482917 | orchestrator |
2025-09-17 15:52:25.482927 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-09-17 15:52:25.482937 | orchestrator |
Wednesday 17 September 2025 15:52:18 +0000 (0:00:06.133) 0:00:07.800 ***
2025-09-17 15:52:25.482947 | orchestrator | changed: [testbed-manager]
2025-09-17 15:52:25.482956 | orchestrator |
2025-09-17 15:52:25.482966 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-09-17 15:52:25.482975 | orchestrator | Wednesday 17 September 2025 15:52:18 +0000 (0:00:00.558) 0:00:08.358 ***
2025-09-17 15:52:25.482985 | orchestrator | changed: [testbed-manager]
2025-09-17 15:52:25.482994 | orchestrator |
2025-09-17 15:52:25.483004 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-09-17 15:52:25.483013 | orchestrator | Wednesday 17 September 2025 15:52:19 +0000 (0:00:00.442) 0:00:08.801 ***
2025-09-17 15:52:25.483022 | orchestrator | ok: [testbed-manager]
2025-09-17 15:52:25.483032 | orchestrator |
2025-09-17 15:52:25.483041 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-09-17 15:52:25.483051 | orchestrator | Wednesday 17 September 2025 15:52:19 +0000 (0:00:00.538) 0:00:09.340 ***
2025-09-17 15:52:25.483060 | orchestrator | ok: [testbed-manager]
2025-09-17 15:52:25.483069 | orchestrator |
2025-09-17 15:52:25.483079 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-09-17 15:52:25.483088 | orchestrator | Wednesday 17 September 2025 15:52:20 +0000 (0:00:00.510) 0:00:09.850 ***
2025-09-17 15:52:25.483097 | orchestrator | ok: [testbed-manager]
2025-09-17 15:52:25.483107 | orchestrator |
2025-09-17 15:52:25.483116 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-09-17 15:52:25.483126 | orchestrator | Wednesday 17 September 2025 15:52:20 +0000 (0:00:00.396) 0:00:10.246 ***
2025-09-17 15:52:25.483135 | orchestrator | changed: [testbed-manager]
2025-09-17 15:52:25.483144 | orchestrator |
2025-09-17 15:52:25.483153 | orchestrator
| TASK [osism.services.wireguard : Copy client configuration files] **************
2025-09-17 15:52:25.483163 | orchestrator | Wednesday 17 September 2025 15:52:21 +0000 (0:00:01.154) 0:00:11.401 ***
2025-09-17 15:52:25.483172 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-17 15:52:25.483182 | orchestrator | changed: [testbed-manager]
2025-09-17 15:52:25.483192 | orchestrator |
2025-09-17 15:52:25.483201 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-09-17 15:52:25.483211 | orchestrator | Wednesday 17 September 2025 15:52:22 +0000 (0:00:00.886) 0:00:12.287 ***
2025-09-17 15:52:25.483233 | orchestrator | changed: [testbed-manager]
2025-09-17 15:52:25.483245 | orchestrator |
2025-09-17 15:52:25.483256 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-09-17 15:52:25.483267 | orchestrator | Wednesday 17 September 2025 15:52:24 +0000 (0:00:01.651) 0:00:13.939 ***
2025-09-17 15:52:25.483278 | orchestrator | changed: [testbed-manager]
2025-09-17 15:52:25.483288 | orchestrator |
2025-09-17 15:52:25.483299 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 15:52:25.483310 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 15:52:25.483322 | orchestrator |
2025-09-17 15:52:25.483333 | orchestrator |
2025-09-17 15:52:25.483344 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 15:52:25.483355 | orchestrator | Wednesday 17 September 2025 15:52:25 +0000 (0:00:00.893) 0:00:14.832 ***
2025-09-17 15:52:25.483365 | orchestrator | ===============================================================================
2025-09-17 15:52:25.483376 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.13s
2025-09-17 15:52:25.483386 | orchestrator |
osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.65s
2025-09-17 15:52:25.483406 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.44s
2025-09-17 15:52:25.483417 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.15s
2025-09-17 15:52:25.483428 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.89s
2025-09-17 15:52:25.483438 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.89s
2025-09-17 15:52:25.483449 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s
2025-09-17 15:52:25.483460 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s
2025-09-17 15:52:25.483471 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.51s
2025-09-17 15:52:25.483482 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s
2025-09-17 15:52:25.483493 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s
2025-09-17 15:52:25.721782 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-09-17 15:52:25.761879 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-09-17 15:52:25.761950 | orchestrator | Dload Upload Total Spent Left Speed
2025-09-17 15:52:25.837276 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 198 0 --:--:-- --:--:-- --:--:-- 200
2025-09-17 15:52:25.849232 | orchestrator | + osism apply --environment custom workarounds
2025-09-17 15:52:27.642321 | orchestrator | 2025-09-17 15:52:27 | INFO  | Trying to run play workarounds in environment custom
2025-09-17 15:52:37.768487 | orchestrator | 2025-09-17 15:52:37 | INFO  | Task 4f40b85e-3aa3-4e30-8592-c679435bc80b (workarounds) was
prepared for execution.
2025-09-17 15:52:37.768634 | orchestrator | 2025-09-17 15:52:37 | INFO  | It takes a moment until task 4f40b85e-3aa3-4e30-8592-c679435bc80b (workarounds) has been started and output is visible here.
2025-09-17 15:53:01.970137 | orchestrator |
2025-09-17 15:53:01.970284 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 15:53:01.970316 | orchestrator |
2025-09-17 15:53:01.970338 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-09-17 15:53:01.970359 | orchestrator | Wednesday 17 September 2025 15:52:41 +0000 (0:00:00.151) 0:00:00.151 ***
2025-09-17 15:53:01.970380 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-09-17 15:53:01.970399 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-09-17 15:53:01.970417 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-09-17 15:53:01.970436 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-09-17 15:53:01.970454 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-09-17 15:53:01.970471 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-09-17 15:53:01.970490 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-09-17 15:53:01.970510 | orchestrator |
2025-09-17 15:53:01.970565 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-09-17 15:53:01.970584 | orchestrator |
2025-09-17 15:53:01.970603 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-09-17 15:53:01.970623 | orchestrator | Wednesday 17 September 2025 15:52:42 +0000 (0:00:00.697) 0:00:00.848 ***
2025-09-17 15:53:01.970644 | orchestrator | ok: [testbed-manager]
2025-09-17 15:53:01.970664 | orchestrator |
2025-09-17
15:53:01.970684 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-17 15:53:01.970703 | orchestrator | 2025-09-17 15:53:01.970723 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-17 15:53:01.970742 | orchestrator | Wednesday 17 September 2025 15:52:44 +0000 (0:00:02.236) 0:00:03.084 *** 2025-09-17 15:53:01.970794 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:53:01.970816 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:53:01.970836 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:53:01.970855 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:53:01.970872 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:53:01.970892 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:53:01.970911 | orchestrator | 2025-09-17 15:53:01.970930 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-17 15:53:01.970948 | orchestrator | 2025-09-17 15:53:01.970964 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-17 15:53:01.970996 | orchestrator | Wednesday 17 September 2025 15:52:46 +0000 (0:00:01.822) 0:00:04.907 *** 2025-09-17 15:53:01.971017 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-17 15:53:01.971030 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-17 15:53:01.971041 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-17 15:53:01.971052 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-17 15:53:01.971063 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-17 
15:53:01.971073 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-17 15:53:01.971084 | orchestrator | 2025-09-17 15:53:01.971095 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-09-17 15:53:01.971105 | orchestrator | Wednesday 17 September 2025 15:52:47 +0000 (0:00:01.402) 0:00:06.309 *** 2025-09-17 15:53:01.971116 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:53:01.971127 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:53:01.971138 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:53:01.971148 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:53:01.971159 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:53:01.971169 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:53:01.971180 | orchestrator | 2025-09-17 15:53:01.971191 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-17 15:53:01.971202 | orchestrator | Wednesday 17 September 2025 15:52:51 +0000 (0:00:03.799) 0:00:10.108 *** 2025-09-17 15:53:01.971213 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:53:01.971223 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:53:01.971234 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:53:01.971246 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:53:01.971256 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:53:01.971267 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:53:01.971277 | orchestrator | 2025-09-17 15:53:01.971288 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-17 15:53:01.971299 | orchestrator | 2025-09-17 15:53:01.971310 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-17 15:53:01.971320 | orchestrator | Wednesday 17 September 2025 15:52:52 +0000 (0:00:00.636) 
0:00:10.745 *** 2025-09-17 15:53:01.971331 | orchestrator | changed: [testbed-manager] 2025-09-17 15:53:01.971342 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:53:01.971352 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:53:01.971363 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:53:01.971373 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:53:01.971384 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:53:01.971395 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:53:01.971405 | orchestrator | 2025-09-17 15:53:01.971416 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-17 15:53:01.971427 | orchestrator | Wednesday 17 September 2025 15:52:53 +0000 (0:00:01.737) 0:00:12.482 *** 2025-09-17 15:53:01.971438 | orchestrator | changed: [testbed-manager] 2025-09-17 15:53:01.971460 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:53:01.971470 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:53:01.971482 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:53:01.971492 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:53:01.971503 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:53:01.971563 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:53:01.971576 | orchestrator | 2025-09-17 15:53:01.971587 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-17 15:53:01.971598 | orchestrator | Wednesday 17 September 2025 15:52:55 +0000 (0:00:01.628) 0:00:14.111 *** 2025-09-17 15:53:01.971609 | orchestrator | ok: [testbed-manager] 2025-09-17 15:53:01.971619 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:53:01.971630 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:53:01.971641 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:53:01.971651 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:53:01.971662 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:53:01.971672 | 
orchestrator | ok: [testbed-node-2] 2025-09-17 15:53:01.971683 | orchestrator | 2025-09-17 15:53:01.971694 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-17 15:53:01.971705 | orchestrator | Wednesday 17 September 2025 15:52:56 +0000 (0:00:01.504) 0:00:15.615 *** 2025-09-17 15:53:01.971715 | orchestrator | changed: [testbed-manager] 2025-09-17 15:53:01.971726 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:53:01.971737 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:53:01.971747 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:53:01.971758 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:53:01.971768 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:53:01.971779 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:53:01.971789 | orchestrator | 2025-09-17 15:53:01.971800 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-17 15:53:01.971811 | orchestrator | Wednesday 17 September 2025 15:52:58 +0000 (0:00:01.748) 0:00:17.364 *** 2025-09-17 15:53:01.971822 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:53:01.971832 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:53:01.971843 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:53:01.971853 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:53:01.971864 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:53:01.971875 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:53:01.971886 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:53:01.971896 | orchestrator | 2025-09-17 15:53:01.971907 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-17 15:53:01.971917 | orchestrator | 2025-09-17 15:53:01.971928 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-17 15:53:01.971939 | orchestrator | Wednesday 17 
September 2025 15:52:59 +0000 (0:00:00.584) 0:00:17.949 *** 2025-09-17 15:53:01.971950 | orchestrator | ok: [testbed-manager] 2025-09-17 15:53:01.971960 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:53:01.971971 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:53:01.971981 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:53:01.971992 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:53:01.972008 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:53:01.972020 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:53:01.972031 | orchestrator | 2025-09-17 15:53:01.972042 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 15:53:01.972054 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:53:01.972066 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:53:01.972078 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:53:01.972088 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:53:01.972108 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:53:01.972119 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:53:01.972130 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:53:01.972141 | orchestrator | 2025-09-17 15:53:01.972152 | orchestrator | 2025-09-17 15:53:01.972163 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 15:53:01.972174 | orchestrator | Wednesday 17 September 2025 15:53:01 +0000 (0:00:02.724) 0:00:20.673 *** 2025-09-17 15:53:01.972185 | orchestrator | 
=============================================================================== 2025-09-17 15:53:01.972195 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.80s 2025-09-17 15:53:01.972206 | orchestrator | Install python3-docker -------------------------------------------------- 2.72s 2025-09-17 15:53:01.972217 | orchestrator | Apply netplan configuration --------------------------------------------- 2.24s 2025-09-17 15:53:01.972228 | orchestrator | Apply netplan configuration --------------------------------------------- 1.82s 2025-09-17 15:53:01.972238 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.75s 2025-09-17 15:53:01.972249 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.74s 2025-09-17 15:53:01.972260 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.63s 2025-09-17 15:53:01.972271 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.50s 2025-09-17 15:53:01.972282 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.40s 2025-09-17 15:53:01.972293 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.70s 2025-09-17 15:53:01.972303 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.64s 2025-09-17 15:53:01.972322 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.58s 2025-09-17 15:53:02.567039 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-17 15:53:14.497363 | orchestrator | 2025-09-17 15:53:14 | INFO  | Task 4101b5e5-2c0a-4bb0-a895-3f425dc8c90e (reboot) was prepared for execution. 
2025-09-17 15:53:14.497466 | orchestrator | 2025-09-17 15:53:14 | INFO  | It takes a moment until task 4101b5e5-2c0a-4bb0-a895-3f425dc8c90e (reboot) has been started and output is visible here. 2025-09-17 15:53:24.209884 | orchestrator | 2025-09-17 15:53:24.209985 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-17 15:53:24.209998 | orchestrator | 2025-09-17 15:53:24.210004 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-17 15:53:24.210012 | orchestrator | Wednesday 17 September 2025 15:53:18 +0000 (0:00:00.207) 0:00:00.207 *** 2025-09-17 15:53:24.210068 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:53:24.210075 | orchestrator | 2025-09-17 15:53:24.210082 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-17 15:53:24.210088 | orchestrator | Wednesday 17 September 2025 15:53:18 +0000 (0:00:00.098) 0:00:00.305 *** 2025-09-17 15:53:24.210094 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:53:24.210100 | orchestrator | 2025-09-17 15:53:24.210107 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-17 15:53:24.210114 | orchestrator | Wednesday 17 September 2025 15:53:19 +0000 (0:00:00.954) 0:00:01.259 *** 2025-09-17 15:53:24.210120 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:53:24.210127 | orchestrator | 2025-09-17 15:53:24.210134 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-17 15:53:24.210178 | orchestrator | 2025-09-17 15:53:24.210187 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-17 15:53:24.210194 | orchestrator | Wednesday 17 September 2025 15:53:19 +0000 (0:00:00.121) 0:00:01.381 *** 2025-09-17 15:53:24.210200 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:53:24.210207 | 
orchestrator | 2025-09-17 15:53:24.210213 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-17 15:53:24.210219 | orchestrator | Wednesday 17 September 2025 15:53:19 +0000 (0:00:00.107) 0:00:01.489 *** 2025-09-17 15:53:24.210225 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:53:24.210230 | orchestrator | 2025-09-17 15:53:24.210237 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-17 15:53:24.210256 | orchestrator | Wednesday 17 September 2025 15:53:20 +0000 (0:00:00.664) 0:00:02.154 *** 2025-09-17 15:53:24.210262 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:53:24.210268 | orchestrator | 2025-09-17 15:53:24.210274 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-17 15:53:24.210280 | orchestrator | 2025-09-17 15:53:24.210285 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-17 15:53:24.210291 | orchestrator | Wednesday 17 September 2025 15:53:20 +0000 (0:00:00.117) 0:00:02.271 *** 2025-09-17 15:53:24.210296 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:53:24.210302 | orchestrator | 2025-09-17 15:53:24.210307 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-17 15:53:24.210312 | orchestrator | Wednesday 17 September 2025 15:53:20 +0000 (0:00:00.187) 0:00:02.459 *** 2025-09-17 15:53:24.210318 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:53:24.210323 | orchestrator | 2025-09-17 15:53:24.210332 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-17 15:53:24.210338 | orchestrator | Wednesday 17 September 2025 15:53:21 +0000 (0:00:00.607) 0:00:03.067 *** 2025-09-17 15:53:24.210344 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:53:24.210350 | orchestrator | 2025-09-17 15:53:24.210356 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-17 15:53:24.210362 | orchestrator | 2025-09-17 15:53:24.210369 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-17 15:53:24.210375 | orchestrator | Wednesday 17 September 2025 15:53:21 +0000 (0:00:00.108) 0:00:03.175 *** 2025-09-17 15:53:24.210382 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:53:24.210389 | orchestrator | 2025-09-17 15:53:24.210396 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-17 15:53:24.210402 | orchestrator | Wednesday 17 September 2025 15:53:21 +0000 (0:00:00.105) 0:00:03.280 *** 2025-09-17 15:53:24.210409 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:53:24.210416 | orchestrator | 2025-09-17 15:53:24.210422 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-17 15:53:24.210429 | orchestrator | Wednesday 17 September 2025 15:53:22 +0000 (0:00:00.619) 0:00:03.900 *** 2025-09-17 15:53:24.210436 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:53:24.210442 | orchestrator | 2025-09-17 15:53:24.210449 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-17 15:53:24.210455 | orchestrator | 2025-09-17 15:53:24.210462 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-17 15:53:24.210468 | orchestrator | Wednesday 17 September 2025 15:53:22 +0000 (0:00:00.123) 0:00:04.024 *** 2025-09-17 15:53:24.210475 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:53:24.210480 | orchestrator | 2025-09-17 15:53:24.210486 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-17 15:53:24.210492 | orchestrator | Wednesday 17 September 2025 15:53:22 +0000 (0:00:00.098) 0:00:04.123 *** 2025-09-17 
15:53:24.210498 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:53:24.210520 | orchestrator | 2025-09-17 15:53:24.210526 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-17 15:53:24.210533 | orchestrator | Wednesday 17 September 2025 15:53:23 +0000 (0:00:00.635) 0:00:04.759 *** 2025-09-17 15:53:24.210558 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:53:24.210564 | orchestrator | 2025-09-17 15:53:24.210571 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-17 15:53:24.210578 | orchestrator | 2025-09-17 15:53:24.210584 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-17 15:53:24.210591 | orchestrator | Wednesday 17 September 2025 15:53:23 +0000 (0:00:00.114) 0:00:04.873 *** 2025-09-17 15:53:24.210598 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:53:24.210606 | orchestrator | 2025-09-17 15:53:24.210621 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-17 15:53:24.210628 | orchestrator | Wednesday 17 September 2025 15:53:23 +0000 (0:00:00.107) 0:00:04.981 *** 2025-09-17 15:53:24.210634 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:53:24.210640 | orchestrator | 2025-09-17 15:53:24.210645 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-17 15:53:24.210652 | orchestrator | Wednesday 17 September 2025 15:53:23 +0000 (0:00:00.635) 0:00:05.617 *** 2025-09-17 15:53:24.210672 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:53:24.210676 | orchestrator | 2025-09-17 15:53:24.210680 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 15:53:24.210691 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:53:24.210697 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:53:24.210701 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:53:24.210704 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:53:24.210708 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:53:24.210718 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:53:24.210722 | orchestrator | 2025-09-17 15:53:24.210726 | orchestrator | 2025-09-17 15:53:24.210730 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 15:53:24.210733 | orchestrator | Wednesday 17 September 2025 15:53:23 +0000 (0:00:00.040) 0:00:05.657 *** 2025-09-17 15:53:24.210738 | orchestrator | =============================================================================== 2025-09-17 15:53:24.210742 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.12s 2025-09-17 15:53:24.210746 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.71s 2025-09-17 15:53:24.210750 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2025-09-17 15:53:24.466082 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-17 15:53:36.467834 | orchestrator | 2025-09-17 15:53:36 | INFO  | Task 3bad240d-7223-45c3-845d-64f99bf53992 (wait-for-connection) was prepared for execution. 2025-09-17 15:53:36.467953 | orchestrator | 2025-09-17 15:53:36 | INFO  | It takes a moment until task 3bad240d-7223-45c3-845d-64f99bf53992 (wait-for-connection) has been started and output is visible here. 
2025-09-17 15:53:52.215364 | orchestrator | 2025-09-17 15:53:52.215483 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-17 15:53:52.215555 | orchestrator | 2025-09-17 15:53:52.215568 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-17 15:53:52.215579 | orchestrator | Wednesday 17 September 2025 15:53:40 +0000 (0:00:00.214) 0:00:00.214 *** 2025-09-17 15:53:52.215618 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:53:52.215631 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:53:52.215641 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:53:52.215652 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:53:52.215662 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:53:52.215673 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:53:52.215683 | orchestrator | 2025-09-17 15:53:52.215694 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 15:53:52.215706 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 15:53:52.215735 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 15:53:52.215747 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 15:53:52.215758 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 15:53:52.215769 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 15:53:52.215779 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 15:53:52.215790 | orchestrator | 2025-09-17 15:53:52.215801 | orchestrator | 2025-09-17 15:53:52.215812 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-17 15:53:52.215823 | orchestrator | Wednesday 17 September 2025 15:53:51 +0000 (0:00:11.620) 0:00:11.835 *** 2025-09-17 15:53:52.215833 | orchestrator | =============================================================================== 2025-09-17 15:53:52.215844 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.62s 2025-09-17 15:53:52.479303 | orchestrator | + osism apply hddtemp 2025-09-17 15:54:04.363800 | orchestrator | 2025-09-17 15:54:04 | INFO  | Task 4265bfcd-4bef-4d5d-b190-74766df47021 (hddtemp) was prepared for execution. 2025-09-17 15:54:04.363877 | orchestrator | 2025-09-17 15:54:04 | INFO  | It takes a moment until task 4265bfcd-4bef-4d5d-b190-74766df47021 (hddtemp) has been started and output is visible here. 2025-09-17 15:54:33.121336 | orchestrator | 2025-09-17 15:54:33.121448 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-17 15:54:33.121466 | orchestrator | 2025-09-17 15:54:33.121532 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-17 15:54:33.121546 | orchestrator | Wednesday 17 September 2025 15:54:08 +0000 (0:00:00.264) 0:00:00.264 *** 2025-09-17 15:54:33.121557 | orchestrator | ok: [testbed-manager] 2025-09-17 15:54:33.121569 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:54:33.121580 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:54:33.121591 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:54:33.121602 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:54:33.121613 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:54:33.121624 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:54:33.121635 | orchestrator | 2025-09-17 15:54:33.121646 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-17 15:54:33.121657 | orchestrator | Wednesday 17 September 
2025 15:54:08 +0000 (0:00:00.662) 0:00:00.926 *** 2025-09-17 15:54:33.121670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:54:33.121684 | orchestrator | 2025-09-17 15:54:33.121695 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-17 15:54:33.121706 | orchestrator | Wednesday 17 September 2025 15:54:10 +0000 (0:00:01.131) 0:00:02.057 *** 2025-09-17 15:54:33.121717 | orchestrator | ok: [testbed-manager] 2025-09-17 15:54:33.121751 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:54:33.121763 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:54:33.121774 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:54:33.121784 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:54:33.121795 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:54:33.121807 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:54:33.121817 | orchestrator | 2025-09-17 15:54:33.121828 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-17 15:54:33.121854 | orchestrator | Wednesday 17 September 2025 15:54:11 +0000 (0:00:01.875) 0:00:03.933 *** 2025-09-17 15:54:33.121865 | orchestrator | changed: [testbed-manager] 2025-09-17 15:54:33.121876 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:54:33.121887 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:54:33.121898 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:54:33.121908 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:54:33.121919 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:54:33.121930 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:54:33.121941 | orchestrator | 2025-09-17 15:54:33.121952 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-17 15:54:33.121963 | orchestrator | Wednesday 17 September 2025 15:54:13 +0000 (0:00:01.112) 0:00:05.046 *** 2025-09-17 15:54:33.121973 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:54:33.121984 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:54:33.121995 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:54:33.122005 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:54:33.122067 | orchestrator | ok: [testbed-manager] 2025-09-17 15:54:33.122082 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:54:33.122093 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:54:33.122103 | orchestrator | 2025-09-17 15:54:33.122114 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-17 15:54:33.122125 | orchestrator | Wednesday 17 September 2025 15:54:15 +0000 (0:00:02.186) 0:00:07.232 *** 2025-09-17 15:54:33.122136 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:54:33.122146 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:54:33.122157 | orchestrator | changed: [testbed-manager] 2025-09-17 15:54:33.122168 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:54:33.122178 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:54:33.122189 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:54:33.122199 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:54:33.122210 | orchestrator | 2025-09-17 15:54:33.122221 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-17 15:54:33.122231 | orchestrator | Wednesday 17 September 2025 15:54:15 +0000 (0:00:00.751) 0:00:07.983 *** 2025-09-17 15:54:33.122242 | orchestrator | changed: [testbed-manager] 2025-09-17 15:54:33.122253 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:54:33.122263 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:54:33.122274 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:54:33.122284 | orchestrator | changed: 
[testbed-node-3] 2025-09-17 15:54:33.122295 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:54:33.122306 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:54:33.122316 | orchestrator | 2025-09-17 15:54:33.122327 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-17 15:54:33.122337 | orchestrator | Wednesday 17 September 2025 15:54:29 +0000 (0:00:13.503) 0:00:21.487 *** 2025-09-17 15:54:33.122349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 15:54:33.122360 | orchestrator | 2025-09-17 15:54:33.122371 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-17 15:54:33.122381 | orchestrator | Wednesday 17 September 2025 15:54:30 +0000 (0:00:01.382) 0:00:22.869 *** 2025-09-17 15:54:33.122392 | orchestrator | changed: [testbed-manager] 2025-09-17 15:54:33.122402 | orchestrator | changed: [testbed-node-1] 2025-09-17 15:54:33.122422 | orchestrator | changed: [testbed-node-2] 2025-09-17 15:54:33.122433 | orchestrator | changed: [testbed-node-0] 2025-09-17 15:54:33.122444 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:54:33.122454 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:54:33.122465 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:54:33.122476 | orchestrator | 2025-09-17 15:54:33.122511 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 15:54:33.122523 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 15:54:33.122554 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:54:33.122566 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:54:33.122577 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:54:33.122587 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:54:33.122598 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:54:33.122609 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:54:33.122620 | orchestrator | 2025-09-17 15:54:33.122631 | orchestrator | 2025-09-17 15:54:33.122641 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 15:54:33.122652 | orchestrator | Wednesday 17 September 2025 15:54:32 +0000 (0:00:01.882) 0:00:24.751 *** 2025-09-17 15:54:33.122663 | orchestrator | =============================================================================== 2025-09-17 15:54:33.122674 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.50s 2025-09-17 15:54:33.122684 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.19s 2025-09-17 15:54:33.122695 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.88s 2025-09-17 15:54:33.122712 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.88s 2025-09-17 15:54:33.122723 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.38s 2025-09-17 15:54:33.122734 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.13s 2025-09-17 15:54:33.122744 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.11s 2025-09-17 15:54:33.122755 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.75s 2025-09-17 15:54:33.122766 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.66s 2025-09-17 15:54:33.389570 | orchestrator | ++ semver 9.2.0 7.1.1 2025-09-17 15:54:33.451669 | orchestrator | + [[ 1 -ge 0 ]] 2025-09-17 15:54:33.451755 | orchestrator | + sudo systemctl restart manager.service 2025-09-17 15:54:46.942591 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-17 15:54:46.942705 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-17 15:54:46.942721 | orchestrator | + local max_attempts=60 2025-09-17 15:54:46.942736 | orchestrator | + local name=ceph-ansible 2025-09-17 15:54:46.942748 | orchestrator | + local attempt_num=1 2025-09-17 15:54:46.942762 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:54:46.970280 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 15:54:46.970346 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 15:54:46.970351 | orchestrator | + sleep 5 2025-09-17 15:54:51.975775 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:54:52.015012 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 15:54:52.015074 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 15:54:52.015083 | orchestrator | + sleep 5 2025-09-17 15:54:57.018163 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:54:57.051823 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 15:54:57.051913 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 15:54:57.051927 | orchestrator | + sleep 5 2025-09-17 15:55:02.056466 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:55:02.095033 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:02.095111 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2025-09-17 15:55:02.095125 | orchestrator | + sleep 5 2025-09-17 15:55:07.098335 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:55:07.132040 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:07.132112 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 15:55:07.132126 | orchestrator | + sleep 5 2025-09-17 15:55:12.135757 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:55:12.171647 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:12.171721 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 15:55:12.171735 | orchestrator | + sleep 5 2025-09-17 15:55:17.177251 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:55:17.214456 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:17.214551 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 15:55:17.214562 | orchestrator | + sleep 5 2025-09-17 15:55:22.220513 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:55:22.242389 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:22.242526 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 15:55:22.242542 | orchestrator | + sleep 5 2025-09-17 15:55:27.247659 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:55:27.372133 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:27.372220 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 15:55:27.372236 | orchestrator | + sleep 5 2025-09-17 15:55:32.375298 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:55:32.417944 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:32.761572 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-17 15:55:32.761655 | orchestrator | + sleep 5 2025-09-17 15:55:37.423074 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:55:37.464638 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:37.464731 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 15:55:37.464745 | orchestrator | + sleep 5 2025-09-17 15:55:42.470008 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:55:42.510955 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:42.511055 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 15:55:42.511071 | orchestrator | + sleep 5 2025-09-17 15:55:47.514760 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:55:47.548640 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:47.548692 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-17 15:55:47.548706 | orchestrator | + sleep 5 2025-09-17 15:55:52.553170 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-17 15:55:52.590439 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:52.590700 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-17 15:55:52.590725 | orchestrator | + local max_attempts=60 2025-09-17 15:55:52.590737 | orchestrator | + local name=kolla-ansible 2025-09-17 15:55:52.590749 | orchestrator | + local attempt_num=1 2025-09-17 15:55:52.591182 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-17 15:55:52.627449 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:52.627676 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-17 15:55:52.627776 | orchestrator | + local max_attempts=60 2025-09-17 15:55:52.628005 | orchestrator | + local name=osism-ansible 2025-09-17 15:55:52.628025 | 
orchestrator | + local attempt_num=1 2025-09-17 15:55:52.629236 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-17 15:55:52.665443 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-17 15:55:52.665513 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-17 15:55:52.665555 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-17 15:55:52.854775 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-17 15:55:52.998763 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-17 15:55:53.136310 | orchestrator | ARA in osism-ansible already disabled. 2025-09-17 15:55:53.291263 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-17 15:55:53.291588 | orchestrator | + osism apply gather-facts 2025-09-17 15:56:05.197822 | orchestrator | 2025-09-17 15:56:05 | INFO  | Task a45b1741-b2b9-499c-b844-184e05b5bec0 (gather-facts) was prepared for execution. 2025-09-17 15:56:05.197922 | orchestrator | 2025-09-17 15:56:05 | INFO  | It takes a moment until task a45b1741-b2b9-499c-b844-184e05b5bec0 (gather-facts) has been started and output is visible here. 
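The `set -x` trace above repeats the body of `wait_for_container_healthy` once per probe. Reconstructed as a standalone function it is roughly the following — a sketch inferred from the trace, not the actual helper shipped in the testbed scripts, and the unqualified `docker` stands in for the traced `/usr/bin/docker`:

```shell
# Poll a container's Docker health status until it reports "healthy",
# giving up after max_attempts probes spaced 5 seconds apart.
# Sketch reconstructed from the set -x trace; the real helper may differ.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1

    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        # Matches the traced `(( attempt_num++ == max_attempts ))` guard.
        if (( attempt_num++ == max_attempts )); then
            echo "container $name never became healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

As the trace shows, `ceph-ansible` cycled through `unhealthy` and then `starting` for about a minute before the probe finally returned `healthy`, while `kolla-ansible` and `osism-ansible` passed on the first probe.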
2025-09-17 15:56:19.260574 | orchestrator | 2025-09-17 15:56:19.260674 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-17 15:56:19.260689 | orchestrator | 2025-09-17 15:56:19.260701 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-17 15:56:19.260729 | orchestrator | Wednesday 17 September 2025 15:56:09 +0000 (0:00:00.234) 0:00:00.234 *** 2025-09-17 15:56:19.260740 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:56:19.260752 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:56:19.260764 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:56:19.260775 | orchestrator | ok: [testbed-manager] 2025-09-17 15:56:19.260786 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:56:19.260797 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:56:19.260807 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:56:19.260818 | orchestrator | 2025-09-17 15:56:19.260830 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-17 15:56:19.260840 | orchestrator | 2025-09-17 15:56:19.260851 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-17 15:56:19.260862 | orchestrator | Wednesday 17 September 2025 15:56:18 +0000 (0:00:09.423) 0:00:09.658 *** 2025-09-17 15:56:19.260873 | orchestrator | skipping: [testbed-manager] 2025-09-17 15:56:19.260903 | orchestrator | skipping: [testbed-node-0] 2025-09-17 15:56:19.260915 | orchestrator | skipping: [testbed-node-1] 2025-09-17 15:56:19.260926 | orchestrator | skipping: [testbed-node-2] 2025-09-17 15:56:19.260937 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:56:19.260948 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:56:19.260959 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:56:19.260969 | orchestrator | 2025-09-17 15:56:19.260980 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-17 15:56:19.260992 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:56:19.261004 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:56:19.261015 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:56:19.261026 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:56:19.261036 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:56:19.261047 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:56:19.261058 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 15:56:19.261069 | orchestrator | 2025-09-17 15:56:19.261080 | orchestrator | 2025-09-17 15:56:19.261092 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 15:56:19.261105 | orchestrator | Wednesday 17 September 2025 15:56:19 +0000 (0:00:00.446) 0:00:10.104 *** 2025-09-17 15:56:19.261136 | orchestrator | =============================================================================== 2025-09-17 15:56:19.261149 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.42s 2025-09-17 15:56:19.261161 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2025-09-17 15:56:19.435969 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-17 15:56:19.452983 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-17 15:56:19.468759 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-17 15:56:19.486759 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-17 15:56:19.504702 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-17 15:56:19.522299 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-17 15:56:19.543416 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-17 15:56:19.561709 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-17 15:56:19.579293 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-17 15:56:19.599867 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-17 15:56:19.622581 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-17 15:56:19.641002 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-17 15:56:19.660864 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-17 15:56:19.679524 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-17 15:56:19.695130 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-17 15:56:19.707875 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-17 15:56:19.720768 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-17 15:56:19.733587 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-17 15:56:19.753118 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-17 15:56:19.767310 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-17 15:56:19.782400 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-17 15:56:19.882290 | orchestrator | ok: Runtime: 0:22:41.129575 2025-09-17 15:56:19.982690 | 2025-09-17 15:56:19.982826 | TASK [Deploy services] 2025-09-17 15:56:20.513301 | orchestrator | skipping: Conditional result was False 2025-09-17 15:56:20.531849 | 2025-09-17 15:56:20.532017 | TASK [Deploy in a nutshell] 2025-09-17 15:56:21.197717 | orchestrator | + set -e 2025-09-17 15:56:21.197895 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-17 15:56:21.197920 | orchestrator | ++ export INTERACTIVE=false 2025-09-17 15:56:21.197940 | orchestrator | ++ INTERACTIVE=false 2025-09-17 15:56:21.197954 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-17 15:56:21.197966 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-17 15:56:21.197980 | orchestrator | + source /opt/manager-vars.sh 2025-09-17 15:56:21.198086 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-17 15:56:21.198120 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-17 15:56:21.198135 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-17 15:56:21.198152 | orchestrator | ++ CEPH_VERSION=reef 2025-09-17 15:56:21.198172 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-17 15:56:21.198202 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2025-09-17 15:56:21.198222 | orchestrator | 2025-09-17 15:56:21.198289 | orchestrator | # PULL IMAGES 2025-09-17 15:56:21.198302 | orchestrator | 2025-09-17 15:56:21.198313 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-17 15:56:21.198327 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-17 15:56:21.198338 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-17 15:56:21.198350 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-17 15:56:21.198361 | orchestrator | ++ export ARA=false 2025-09-17 15:56:21.198372 | orchestrator | ++ ARA=false 2025-09-17 15:56:21.198382 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-17 15:56:21.198393 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-17 15:56:21.198403 | orchestrator | ++ export TEMPEST=false 2025-09-17 15:56:21.198414 | orchestrator | ++ TEMPEST=false 2025-09-17 15:56:21.198424 | orchestrator | ++ export IS_ZUUL=true 2025-09-17 15:56:21.198435 | orchestrator | ++ IS_ZUUL=true 2025-09-17 15:56:21.198445 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.205 2025-09-17 15:56:21.198456 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.205 2025-09-17 15:56:21.198506 | orchestrator | ++ export EXTERNAL_API=false 2025-09-17 15:56:21.198517 | orchestrator | ++ EXTERNAL_API=false 2025-09-17 15:56:21.198528 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-17 15:56:21.198539 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-17 15:56:21.198550 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-17 15:56:21.198560 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-17 15:56:21.198571 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-17 15:56:21.198589 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-17 15:56:21.198601 | orchestrator | + echo 2025-09-17 15:56:21.198611 | orchestrator | + echo '# PULL IMAGES' 2025-09-17 15:56:21.198622 | orchestrator | + echo 2025-09-17 15:56:21.198648 | orchestrator | ++ semver 9.2.0 7.0.0 2025-09-17 
15:56:21.255329 | orchestrator | + [[ 1 -ge 0 ]] 2025-09-17 15:56:21.255400 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-17 15:56:23.005120 | orchestrator | 2025-09-17 15:56:23 | INFO  | Trying to run play pull-images in environment custom 2025-09-17 15:56:33.122235 | orchestrator | 2025-09-17 15:56:33 | INFO  | Task 26296556-d57a-4df2-9133-5f148f5a9f0b (pull-images) was prepared for execution. 2025-09-17 15:56:33.122354 | orchestrator | 2025-09-17 15:56:33 | INFO  | Task 26296556-d57a-4df2-9133-5f148f5a9f0b is running in background. No more output. Check ARA for logs. 2025-09-17 15:56:35.216647 | orchestrator | 2025-09-17 15:56:35 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-17 15:56:45.384006 | orchestrator | 2025-09-17 15:56:45 | INFO  | Task 92f9e568-e3b3-4242-a704-61fa981921c4 (wipe-partitions) was prepared for execution. 2025-09-17 15:56:45.384110 | orchestrator | 2025-09-17 15:56:45 | INFO  | It takes a moment until task 92f9e568-e3b3-4242-a704-61fa981921c4 (wipe-partitions) has been started and output is visible here. 
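Per device, the `wipe-partitions` play that follows boils down to roughly this sequence. This is a hedged sketch: the device names `/dev/sdb`–`/dev/sdd` and the 32 MiB zero-fill are taken from the play output below, while `wipe_device` is an illustrative name, not a helper that actually exists in the play:

```shell
# Illustrative per-device wipe mirroring the tasks in the play below:
# wipefs clears filesystem/partition-table signatures, then the first
# 32 MiB are zeroed so stale LVM/Ceph metadata cannot be rediscovered.
wipe_device() {
    local dev=$1
    wipefs --all "$dev" 2>/dev/null || true   # "Wipe partitions with wipefs"
    dd if=/dev/zero of="$dev" bs=1M count=32 \
        conv=notrunc status=none              # "Overwrite first 32M with zeros"
}

# for dev in /dev/sdb /dev/sdc /dev/sdd; do
#     [ -b "$dev" ] && wipe_device "$dev"     # "Check device availability"
# done
# udevadm control --reload-rules              # "Reload udev rules"
# udevadm trigger                             # "Request device events from the kernel"
```

The udev reload and trigger at the end make the kernel re-emit device events, so later plays see the freshly wiped disks without a reboot.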
2025-09-17 15:56:58.123514 | orchestrator | 2025-09-17 15:56:58.123617 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-17 15:56:58.123635 | orchestrator | 2025-09-17 15:56:58.123647 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-17 15:56:58.123666 | orchestrator | Wednesday 17 September 2025 15:56:49 +0000 (0:00:00.147) 0:00:00.147 *** 2025-09-17 15:56:58.123677 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:56:58.123689 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:56:58.123701 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:56:58.123712 | orchestrator | 2025-09-17 15:56:58.123723 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-17 15:56:58.123757 | orchestrator | Wednesday 17 September 2025 15:56:49 +0000 (0:00:00.608) 0:00:00.756 *** 2025-09-17 15:56:58.123768 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:56:58.123779 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:56:58.123789 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:56:58.123804 | orchestrator | 2025-09-17 15:56:58.123815 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-17 15:56:58.123826 | orchestrator | Wednesday 17 September 2025 15:56:50 +0000 (0:00:00.222) 0:00:00.979 *** 2025-09-17 15:56:58.123837 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:56:58.123848 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:56:58.123859 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:56:58.123869 | orchestrator | 2025-09-17 15:56:58.123880 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-17 15:56:58.123890 | orchestrator | Wednesday 17 September 2025 15:56:50 +0000 (0:00:00.701) 0:00:01.681 *** 2025-09-17 15:56:58.123901 | orchestrator | skipping: 
[testbed-node-3] 2025-09-17 15:56:58.123912 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:56:58.123922 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:56:58.123933 | orchestrator | 2025-09-17 15:56:58.123943 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-17 15:56:58.123954 | orchestrator | Wednesday 17 September 2025 15:56:50 +0000 (0:00:00.215) 0:00:01.896 *** 2025-09-17 15:56:58.123965 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-17 15:56:58.123979 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-17 15:56:58.123990 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-17 15:56:58.124001 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-17 15:56:58.124011 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-17 15:56:58.124024 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-17 15:56:58.124035 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-17 15:56:58.124047 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-17 15:56:58.124059 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-17 15:56:58.124071 | orchestrator | 2025-09-17 15:56:58.124083 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-17 15:56:58.124096 | orchestrator | Wednesday 17 September 2025 15:56:53 +0000 (0:00:02.237) 0:00:04.134 *** 2025-09-17 15:56:58.124108 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-17 15:56:58.124120 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-17 15:56:58.124131 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-17 15:56:58.124144 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-17 15:56:58.124155 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-17 15:56:58.124167 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-09-17 15:56:58.124179 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-17 15:56:58.124191 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-17 15:56:58.124203 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-17 15:56:58.124215 | orchestrator | 2025-09-17 15:56:58.124227 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-17 15:56:58.124240 | orchestrator | Wednesday 17 September 2025 15:56:54 +0000 (0:00:01.357) 0:00:05.491 *** 2025-09-17 15:56:58.124251 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-17 15:56:58.124263 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-17 15:56:58.124275 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-17 15:56:58.124287 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-17 15:56:58.124299 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-17 15:56:58.124311 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-17 15:56:58.124323 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-17 15:56:58.124335 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-17 15:56:58.124361 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-17 15:56:58.124374 | orchestrator | 2025-09-17 15:56:58.124386 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-17 15:56:58.124396 | orchestrator | Wednesday 17 September 2025 15:56:56 +0000 (0:00:02.098) 0:00:07.590 *** 2025-09-17 15:56:58.124406 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:56:58.124417 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:56:58.124427 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:56:58.124438 | orchestrator | 2025-09-17 15:56:58.124448 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-09-17 15:56:58.124478 | orchestrator | Wednesday 17 September 2025 15:56:57 +0000 (0:00:00.625) 0:00:08.216 *** 2025-09-17 15:56:58.124490 | orchestrator | changed: [testbed-node-3] 2025-09-17 15:56:58.124500 | orchestrator | changed: [testbed-node-4] 2025-09-17 15:56:58.124510 | orchestrator | changed: [testbed-node-5] 2025-09-17 15:56:58.124520 | orchestrator | 2025-09-17 15:56:58.124531 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 15:56:58.124543 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:56:58.124555 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:56:58.124581 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 15:56:58.124592 | orchestrator | 2025-09-17 15:56:58.124603 | orchestrator | 2025-09-17 15:56:58.124614 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 15:56:58.124625 | orchestrator | Wednesday 17 September 2025 15:56:57 +0000 (0:00:00.596) 0:00:08.812 *** 2025-09-17 15:56:58.124635 | orchestrator | =============================================================================== 2025-09-17 15:56:58.124645 | orchestrator | Check device availability ----------------------------------------------- 2.24s 2025-09-17 15:56:58.124656 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.10s 2025-09-17 15:56:58.124667 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s 2025-09-17 15:56:58.124677 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.70s 2025-09-17 15:56:58.124688 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.63s 2025-09-17 15:56:58.124698 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.61s 2025-09-17 15:56:58.124709 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s 2025-09-17 15:56:58.124719 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s 2025-09-17 15:56:58.124730 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s 2025-09-17 15:57:10.050089 | orchestrator | 2025-09-17 15:57:10 | INFO  | Task 171331f5-5552-421e-ac82-1a4069467c41 (facts) was prepared for execution. 2025-09-17 15:57:10.050201 | orchestrator | 2025-09-17 15:57:10 | INFO  | It takes a moment until task 171331f5-5552-421e-ac82-1a4069467c41 (facts) has been started and output is visible here. 2025-09-17 15:57:20.907714 | orchestrator | 2025-09-17 15:57:20.907814 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-17 15:57:20.907830 | orchestrator | 2025-09-17 15:57:20.907842 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-17 15:57:20.907853 | orchestrator | Wednesday 17 September 2025 15:57:13 +0000 (0:00:00.251) 0:00:00.251 *** 2025-09-17 15:57:20.907864 | orchestrator | ok: [testbed-manager] 2025-09-17 15:57:20.907876 | orchestrator | ok: [testbed-node-0] 2025-09-17 15:57:20.907886 | orchestrator | ok: [testbed-node-1] 2025-09-17 15:57:20.907897 | orchestrator | ok: [testbed-node-2] 2025-09-17 15:57:20.907938 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:57:20.907949 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:57:20.907960 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:57:20.907971 | orchestrator | 2025-09-17 15:57:20.907981 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-17 
15:57:20.907992 | orchestrator | Wednesday 17 September 2025 15:57:14 +0000 (0:00:00.975) 0:00:01.227 ***
2025-09-17 15:57:20.908003 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:57:20.908014 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:57:20.908025 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:57:20.908036 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:57:20.908046 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:20.908057 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:20.908067 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:57:20.908082 | orchestrator |
2025-09-17 15:57:20.908100 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-17 15:57:20.908118 | orchestrator |
2025-09-17 15:57:20.908155 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-17 15:57:20.908176 | orchestrator | Wednesday 17 September 2025 15:57:15 +0000 (0:00:01.057) 0:00:02.285 ***
2025-09-17 15:57:20.908188 | orchestrator | ok: [testbed-node-2]
2025-09-17 15:57:20.908199 | orchestrator | ok: [testbed-node-0]
2025-09-17 15:57:20.908209 | orchestrator | ok: [testbed-node-1]
2025-09-17 15:57:20.908220 | orchestrator | ok: [testbed-manager]
2025-09-17 15:57:20.908231 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:57:20.908242 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:57:20.908252 | orchestrator | ok: [testbed-node-5]
2025-09-17 15:57:20.908262 | orchestrator |
2025-09-17 15:57:20.908274 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-17 15:57:20.908285 | orchestrator |
2025-09-17 15:57:20.908297 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-17 15:57:20.908309 | orchestrator | Wednesday 17 September 2025 15:57:20 +0000 (0:00:04.536) 0:00:06.821 ***
2025-09-17 15:57:20.908321 | orchestrator | skipping: [testbed-manager]
2025-09-17 15:57:20.908332 | orchestrator | skipping: [testbed-node-0]
2025-09-17 15:57:20.908344 | orchestrator | skipping: [testbed-node-1]
2025-09-17 15:57:20.908355 | orchestrator | skipping: [testbed-node-2]
2025-09-17 15:57:20.908367 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:20.908378 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:20.908390 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:57:20.908402 | orchestrator |
2025-09-17 15:57:20.908414 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 15:57:20.908426 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 15:57:20.908439 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 15:57:20.908451 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 15:57:20.908488 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 15:57:20.908500 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 15:57:20.908512 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 15:57:20.908524 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 15:57:20.908536 | orchestrator |
2025-09-17 15:57:20.908547 | orchestrator |
2025-09-17 15:57:20.908559 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 15:57:20.908581 | orchestrator | Wednesday 17 September 2025 15:57:20 +0000 (0:00:00.444) 0:00:07.266 ***
2025-09-17 15:57:20.908592 | orchestrator | ===============================================================================
2025-09-17 15:57:20.908604 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.54s
2025-09-17 15:57:20.908615 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s
2025-09-17 15:57:20.908627 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.98s
2025-09-17 15:57:20.908639 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s
2025-09-17 15:57:22.786175 | orchestrator | 2025-09-17 15:57:22 | INFO  | Task 90644669-bac8-43fd-af01-4caa82e1958f (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-17 15:57:22.786264 | orchestrator | 2025-09-17 15:57:22 | INFO  | It takes a moment until task 90644669-bac8-43fd-af01-4caa82e1958f (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-17 15:57:34.143334 | orchestrator |
2025-09-17 15:57:34.143409 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-17 15:57:34.143416 | orchestrator |
2025-09-17 15:57:34.143420 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-17 15:57:34.143425 | orchestrator | Wednesday 17 September 2025 15:57:27 +0000 (0:00:00.323) 0:00:00.323 ***
2025-09-17 15:57:34.143430 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-17 15:57:34.143434 | orchestrator |
2025-09-17 15:57:34.143439 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-17 15:57:34.143443 | orchestrator | Wednesday 17 September 2025 15:57:27 +0000 (0:00:00.263) 0:00:00.586 ***
2025-09-17 15:57:34.143447 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:57:34.143451 | orchestrator |
2025-09-17 15:57:34.143472 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143476 | orchestrator | Wednesday 17 September 2025 15:57:27 +0000 (0:00:00.226) 0:00:00.812 ***
2025-09-17 15:57:34.143480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-17 15:57:34.143485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-17 15:57:34.143494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-17 15:57:34.143499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-17 15:57:34.143503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-17 15:57:34.143507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-17 15:57:34.143511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-17 15:57:34.143515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-17 15:57:34.143519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-17 15:57:34.143523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-17 15:57:34.143527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-17 15:57:34.143531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-17 15:57:34.143535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-17 15:57:34.143539 | orchestrator |
2025-09-17 15:57:34.143542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143546 | orchestrator | Wednesday 17 September 2025 15:57:28 +0000 (0:00:00.361) 0:00:01.173 ***
2025-09-17 15:57:34.143551 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143555 | orchestrator |
2025-09-17 15:57:34.143572 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143576 | orchestrator | Wednesday 17 September 2025 15:57:28 +0000 (0:00:00.386) 0:00:01.560 ***
2025-09-17 15:57:34.143580 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143584 | orchestrator |
2025-09-17 15:57:34.143588 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143592 | orchestrator | Wednesday 17 September 2025 15:57:28 +0000 (0:00:00.181) 0:00:01.742 ***
2025-09-17 15:57:34.143595 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143599 | orchestrator |
2025-09-17 15:57:34.143603 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143607 | orchestrator | Wednesday 17 September 2025 15:57:28 +0000 (0:00:00.170) 0:00:01.913 ***
2025-09-17 15:57:34.143611 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143615 | orchestrator |
2025-09-17 15:57:34.143622 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143626 | orchestrator | Wednesday 17 September 2025 15:57:29 +0000 (0:00:00.169) 0:00:02.082 ***
2025-09-17 15:57:34.143629 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143633 | orchestrator |
2025-09-17 15:57:34.143637 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143641 | orchestrator | Wednesday 17 September 2025 15:57:29 +0000 (0:00:00.194) 0:00:02.276 ***
2025-09-17 15:57:34.143645 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143649 | orchestrator |
2025-09-17 15:57:34.143653 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143657 | orchestrator | Wednesday 17 September 2025 15:57:29 +0000 (0:00:00.185) 0:00:02.461 ***
2025-09-17 15:57:34.143660 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143664 | orchestrator |
2025-09-17 15:57:34.143668 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143672 | orchestrator | Wednesday 17 September 2025 15:57:29 +0000 (0:00:00.194) 0:00:02.656 ***
2025-09-17 15:57:34.143676 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143679 | orchestrator |
2025-09-17 15:57:34.143683 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143687 | orchestrator | Wednesday 17 September 2025 15:57:29 +0000 (0:00:00.181) 0:00:02.838 ***
2025-09-17 15:57:34.143691 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229)
2025-09-17 15:57:34.143696 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229)
2025-09-17 15:57:34.143700 | orchestrator |
2025-09-17 15:57:34.143704 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143708 | orchestrator | Wednesday 17 September 2025 15:57:30 +0000 (0:00:00.367) 0:00:03.205 ***
2025-09-17 15:57:34.143723 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_abcb278f-9464-4e60-af45-8a9c7109c560)
2025-09-17 15:57:34.143727 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_abcb278f-9464-4e60-af45-8a9c7109c560)
2025-09-17 15:57:34.143731 | orchestrator |
2025-09-17 15:57:34.143735 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143741 | orchestrator | Wednesday 17 September 2025 15:57:30 +0000 (0:00:00.369) 0:00:03.575 ***
2025-09-17 15:57:34.143745 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5c4f947a-1fa9-4d40-922c-b00760e10f53)
2025-09-17 15:57:34.143749 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5c4f947a-1fa9-4d40-922c-b00760e10f53)
2025-09-17 15:57:34.143753 | orchestrator |
2025-09-17 15:57:34.143757 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143760 | orchestrator | Wednesday 17 September 2025 15:57:31 +0000 (0:00:00.506) 0:00:04.082 ***
2025-09-17 15:57:34.143764 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_81270acf-a9ce-49fe-b935-471dffd13372)
2025-09-17 15:57:34.143776 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_81270acf-a9ce-49fe-b935-471dffd13372)
2025-09-17 15:57:34.143780 | orchestrator |
2025-09-17 15:57:34.143784 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:34.143788 | orchestrator | Wednesday 17 September 2025 15:57:31 +0000 (0:00:00.535) 0:00:04.617 ***
2025-09-17 15:57:34.143792 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-17 15:57:34.143796 | orchestrator |
2025-09-17 15:57:34.143800 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:34.143803 | orchestrator | Wednesday 17 September 2025 15:57:32 +0000 (0:00:00.576) 0:00:05.194 ***
2025-09-17 15:57:34.143807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-17 15:57:34.143811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-17 15:57:34.143815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-17 15:57:34.143819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-17 15:57:34.143822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-17 15:57:34.143826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-17 15:57:34.143830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-17 15:57:34.143834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-17 15:57:34.143838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-17 15:57:34.143841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-17 15:57:34.143845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-17 15:57:34.143849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-17 15:57:34.143853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-17 15:57:34.143857 | orchestrator |
2025-09-17 15:57:34.143861 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:34.143864 | orchestrator | Wednesday 17 September 2025 15:57:32 +0000 (0:00:00.351) 0:00:05.546 ***
2025-09-17 15:57:34.143868 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143872 | orchestrator |
2025-09-17 15:57:34.143876 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:34.143880 | orchestrator | Wednesday 17 September 2025 15:57:32 +0000 (0:00:00.202) 0:00:05.749 ***
2025-09-17 15:57:34.143884 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143887 | orchestrator |
2025-09-17 15:57:34.143891 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:34.143895 | orchestrator | Wednesday 17 September 2025 15:57:32 +0000 (0:00:00.203) 0:00:05.952 ***
2025-09-17 15:57:34.143900 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143904 | orchestrator |
2025-09-17 15:57:34.143908 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:34.143913 | orchestrator | Wednesday 17 September 2025 15:57:33 +0000 (0:00:00.200) 0:00:06.153 ***
2025-09-17 15:57:34.143917 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143921 | orchestrator |
2025-09-17 15:57:34.143926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:34.143930 | orchestrator | Wednesday 17 September 2025 15:57:33 +0000 (0:00:00.187) 0:00:06.341 ***
2025-09-17 15:57:34.143934 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143939 | orchestrator |
2025-09-17 15:57:34.143943 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:34.143951 | orchestrator | Wednesday 17 September 2025 15:57:33 +0000 (0:00:00.199) 0:00:06.540 ***
2025-09-17 15:57:34.143955 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143959 | orchestrator |
2025-09-17 15:57:34.143964 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:34.143968 | orchestrator | Wednesday 17 September 2025 15:57:33 +0000 (0:00:00.203) 0:00:06.743 ***
2025-09-17 15:57:34.143972 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:34.143976 | orchestrator |
2025-09-17 15:57:34.143981 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:34.143985 | orchestrator | Wednesday 17 September 2025 15:57:33 +0000 (0:00:00.184) 0:00:06.928 ***
2025-09-17 15:57:34.143992 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.869818 | orchestrator |
2025-09-17 15:57:40.869914 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:40.869931 | orchestrator | Wednesday 17 September 2025 15:57:34 +0000 (0:00:00.202) 0:00:07.131 ***
2025-09-17 15:57:40.869943 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-17 15:57:40.869955 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-17 15:57:40.869966 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-17 15:57:40.869977 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-17 15:57:40.869988 | orchestrator |
2025-09-17 15:57:40.869999 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:40.870112 | orchestrator | Wednesday 17 September 2025 15:57:34 +0000 (0:00:00.856) 0:00:07.988 ***
2025-09-17 15:57:40.870129 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.870140 | orchestrator |
2025-09-17 15:57:40.870151 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:40.870162 | orchestrator | Wednesday 17 September 2025 15:57:35 +0000 (0:00:00.175) 0:00:08.164 ***
2025-09-17 15:57:40.870172 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.870183 | orchestrator |
2025-09-17 15:57:40.870193 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:40.870204 | orchestrator | Wednesday 17 September 2025 15:57:35 +0000 (0:00:00.179) 0:00:08.343 ***
2025-09-17 15:57:40.870215 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.870225 | orchestrator |
2025-09-17 15:57:40.870236 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:40.870247 | orchestrator | Wednesday 17 September 2025 15:57:35 +0000 (0:00:00.184) 0:00:08.527 ***
2025-09-17 15:57:40.870257 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.870268 | orchestrator |
2025-09-17 15:57:40.870279 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-17 15:57:40.870289 | orchestrator | Wednesday 17 September 2025 15:57:35 +0000 (0:00:00.180) 0:00:08.708 ***
2025-09-17 15:57:40.870300 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-09-17 15:57:40.870311 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-09-17 15:57:40.870321 | orchestrator |
2025-09-17 15:57:40.870332 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-17 15:57:40.870343 | orchestrator | Wednesday 17 September 2025 15:57:35 +0000 (0:00:00.197) 0:00:08.905 ***
2025-09-17 15:57:40.870353 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.870364 | orchestrator |
2025-09-17 15:57:40.870376 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-17 15:57:40.870387 | orchestrator | Wednesday 17 September 2025 15:57:36 +0000 (0:00:00.136) 0:00:09.042 ***
2025-09-17 15:57:40.870400 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.870412 | orchestrator |
2025-09-17 15:57:40.870424 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-17 15:57:40.870436 | orchestrator | Wednesday 17 September 2025 15:57:36 +0000 (0:00:00.134) 0:00:09.176 ***
2025-09-17 15:57:40.870448 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.870486 | orchestrator |
2025-09-17 15:57:40.870520 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-17 15:57:40.870531 | orchestrator | Wednesday 17 September 2025 15:57:36 +0000 (0:00:00.125) 0:00:09.301 ***
2025-09-17 15:57:40.870542 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:57:40.870553 | orchestrator |
2025-09-17 15:57:40.870563 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-17 15:57:40.870574 | orchestrator | Wednesday 17 September 2025 15:57:36 +0000 (0:00:00.147) 0:00:09.449 ***
2025-09-17 15:57:40.870585 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c66c71d-5352-5b3e-b37c-d5d685617e79'}})
2025-09-17 15:57:40.870596 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'}})
2025-09-17 15:57:40.870606 | orchestrator |
2025-09-17 15:57:40.870617 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-17 15:57:40.870627 | orchestrator | Wednesday 17 September 2025 15:57:36 +0000 (0:00:00.167) 0:00:09.617 ***
2025-09-17 15:57:40.870638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c66c71d-5352-5b3e-b37c-d5d685617e79'}})
2025-09-17 15:57:40.870657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'}})
2025-09-17 15:57:40.870668 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.870679 | orchestrator |
2025-09-17 15:57:40.870690 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-17 15:57:40.870700 | orchestrator | Wednesday 17 September 2025 15:57:36 +0000 (0:00:00.137) 0:00:09.754 ***
2025-09-17 15:57:40.870711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c66c71d-5352-5b3e-b37c-d5d685617e79'}})
2025-09-17 15:57:40.870721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'}})
2025-09-17 15:57:40.870732 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.870742 | orchestrator |
2025-09-17 15:57:40.870753 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-17 15:57:40.870764 | orchestrator | Wednesday 17 September 2025 15:57:36 +0000 (0:00:00.126) 0:00:09.880 ***
2025-09-17 15:57:40.870774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c66c71d-5352-5b3e-b37c-d5d685617e79'}})
2025-09-17 15:57:40.870785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'}})
2025-09-17 15:57:40.870796 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.870806 | orchestrator |
2025-09-17 15:57:40.870834 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-17 15:57:40.870846 | orchestrator | Wednesday 17 September 2025 15:57:37 +0000 (0:00:00.261) 0:00:10.142 ***
2025-09-17 15:57:40.870856 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:57:40.870867 | orchestrator |
2025-09-17 15:57:40.870877 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-17 15:57:40.870888 | orchestrator | Wednesday 17 September 2025 15:57:37 +0000 (0:00:00.128) 0:00:10.270 ***
2025-09-17 15:57:40.870899 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:57:40.870909 | orchestrator |
2025-09-17 15:57:40.870920 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-17 15:57:40.870930 | orchestrator | Wednesday 17 September 2025 15:57:37 +0000 (0:00:00.130) 0:00:10.400 ***
2025-09-17 15:57:40.870941 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.870951 | orchestrator |
2025-09-17 15:57:40.870962 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-17 15:57:40.870973 | orchestrator | Wednesday 17 September 2025 15:57:37 +0000 (0:00:00.136) 0:00:10.537 ***
2025-09-17 15:57:40.870983 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.870994 | orchestrator |
2025-09-17 15:57:40.871004 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-17 15:57:40.871024 | orchestrator | Wednesday 17 September 2025 15:57:37 +0000 (0:00:00.124) 0:00:10.661 ***
2025-09-17 15:57:40.871035 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.871045 | orchestrator |
2025-09-17 15:57:40.871056 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-17 15:57:40.871067 | orchestrator | Wednesday 17 September 2025 15:57:37 +0000 (0:00:00.126) 0:00:10.788 ***
2025-09-17 15:57:40.871077 | orchestrator | ok: [testbed-node-3] => {
2025-09-17 15:57:40.871088 | orchestrator |     "ceph_osd_devices": {
2025-09-17 15:57:40.871099 | orchestrator |         "sdb": {
2025-09-17 15:57:40.871110 | orchestrator |             "osd_lvm_uuid": "3c66c71d-5352-5b3e-b37c-d5d685617e79"
2025-09-17 15:57:40.871120 | orchestrator |         },
2025-09-17 15:57:40.871131 | orchestrator |         "sdc": {
2025-09-17 15:57:40.871141 | orchestrator |             "osd_lvm_uuid": "e55f2ffc-2f4d-55e1-8c19-2e9977a4942c"
2025-09-17 15:57:40.871152 | orchestrator |         }
2025-09-17 15:57:40.871163 | orchestrator |     }
2025-09-17 15:57:40.871173 | orchestrator | }
2025-09-17 15:57:40.871184 | orchestrator |
2025-09-17 15:57:40.871195 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-17 15:57:40.871205 | orchestrator | Wednesday 17 September 2025 15:57:37 +0000 (0:00:00.131) 0:00:10.920 ***
2025-09-17 15:57:40.871216 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.871226 | orchestrator |
2025-09-17 15:57:40.871237 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-17 15:57:40.871248 | orchestrator | Wednesday 17 September 2025 15:57:38 +0000 (0:00:00.124) 0:00:11.044 ***
2025-09-17 15:57:40.871263 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.871274 | orchestrator |
2025-09-17 15:57:40.871285 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-17 15:57:40.871296 | orchestrator | Wednesday 17 September 2025 15:57:38 +0000 (0:00:00.128) 0:00:11.173 ***
2025-09-17 15:57:40.871306 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:57:40.871317 | orchestrator |
2025-09-17 15:57:40.871327 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-17 15:57:40.871338 | orchestrator | Wednesday 17 September 2025 15:57:38 +0000 (0:00:00.118) 0:00:11.291 ***
2025-09-17 15:57:40.871348 | orchestrator | changed: [testbed-node-3] => {
2025-09-17 15:57:40.871359 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-17 15:57:40.871369 | orchestrator |         "ceph_osd_devices": {
2025-09-17 15:57:40.871380 | orchestrator |             "sdb": {
2025-09-17 15:57:40.871391 | orchestrator |                 "osd_lvm_uuid": "3c66c71d-5352-5b3e-b37c-d5d685617e79"
2025-09-17 15:57:40.871401 | orchestrator |             },
2025-09-17 15:57:40.871412 | orchestrator |             "sdc": {
2025-09-17 15:57:40.871422 | orchestrator |                 "osd_lvm_uuid": "e55f2ffc-2f4d-55e1-8c19-2e9977a4942c"
2025-09-17 15:57:40.871433 | orchestrator |             }
2025-09-17 15:57:40.871444 | orchestrator |         },
2025-09-17 15:57:40.871480 | orchestrator |         "lvm_volumes": [
2025-09-17 15:57:40.871492 | orchestrator |             {
2025-09-17 15:57:40.871502 | orchestrator |                 "data": "osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79",
2025-09-17 15:57:40.871513 | orchestrator |                 "data_vg": "ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79"
2025-09-17 15:57:40.871524 | orchestrator |             },
2025-09-17 15:57:40.871534 | orchestrator |             {
2025-09-17 15:57:40.871545 | orchestrator |                 "data": "osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c",
2025-09-17 15:57:40.871555 | orchestrator |                 "data_vg": "ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c"
2025-09-17 15:57:40.871566 | orchestrator |             }
2025-09-17 15:57:40.871576 | orchestrator |         ]
2025-09-17 15:57:40.871587 | orchestrator |     }
2025-09-17 15:57:40.871597 | orchestrator | }
2025-09-17 15:57:40.871608 | orchestrator |
2025-09-17 15:57:40.871619 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-17 15:57:40.871629 | orchestrator | Wednesday 17 September 2025 15:57:38 +0000 (0:00:00.193) 0:00:11.485 ***
2025-09-17 15:57:40.871647 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-17 15:57:40.871658 | orchestrator |
2025-09-17 15:57:40.871668 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-17 15:57:40.871679 | orchestrator |
2025-09-17 15:57:40.871689 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-17 15:57:40.871700 | orchestrator | Wednesday 17 September 2025 15:57:40 +0000 (0:00:01.920) 0:00:13.405 ***
2025-09-17 15:57:40.871710 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-17 15:57:40.871721 | orchestrator |
2025-09-17 15:57:40.871732 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-17 15:57:40.871742 | orchestrator | Wednesday 17 September 2025 15:57:40 +0000 (0:00:00.234) 0:00:13.639 ***
2025-09-17 15:57:40.871752 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:57:40.871763 | orchestrator |
2025-09-17 15:57:40.871774 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:40.871791 | orchestrator | Wednesday 17 September 2025 15:57:40 +0000 (0:00:00.219) 0:00:13.859 ***
2025-09-17 15:57:48.440441 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-17 15:57:48.440581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-17 15:57:48.440593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-17 15:57:48.440600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-17 15:57:48.440608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-17 15:57:48.440615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-17 15:57:48.440622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-17 15:57:48.440630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-17 15:57:48.440638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-17 15:57:48.440645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-17 15:57:48.440668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-17 15:57:48.440676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-17 15:57:48.440682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-17 15:57:48.440690 | orchestrator |
2025-09-17 15:57:48.440702 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.440711 | orchestrator | Wednesday 17 September 2025 15:57:41 +0000 (0:00:00.382) 0:00:14.242 ***
2025-09-17 15:57:48.440719 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:48.440727 | orchestrator |
2025-09-17 15:57:48.440734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.440741 | orchestrator | Wednesday 17 September 2025 15:57:41 +0000 (0:00:00.192) 0:00:14.434 ***
2025-09-17 15:57:48.440748 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:48.440756 | orchestrator |
2025-09-17 15:57:48.440763 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.440771 | orchestrator | Wednesday 17 September 2025 15:57:41 +0000 (0:00:00.199) 0:00:14.633 ***
2025-09-17 15:57:48.440779 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:48.440786 | orchestrator |
2025-09-17 15:57:48.440793 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.440799 | orchestrator | Wednesday 17 September 2025 15:57:41 +0000 (0:00:00.194) 0:00:14.828 ***
2025-09-17 15:57:48.440807 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:48.440814 | orchestrator |
2025-09-17 15:57:48.440843 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.440851 | orchestrator | Wednesday 17 September 2025 15:57:42 +0000 (0:00:00.205) 0:00:15.034 ***
2025-09-17 15:57:48.440859 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:48.440866 | orchestrator |
2025-09-17 15:57:48.440873 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.440879 | orchestrator | Wednesday 17 September 2025 15:57:42 +0000 (0:00:00.174) 0:00:15.208 ***
2025-09-17 15:57:48.440886 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:48.440892 | orchestrator |
2025-09-17 15:57:48.440899 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.440905 | orchestrator | Wednesday 17 September 2025 15:57:42 +0000 (0:00:00.456) 0:00:15.665 ***
2025-09-17 15:57:48.440912 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:48.440918 | orchestrator |
2025-09-17 15:57:48.440925 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.440931 | orchestrator | Wednesday 17 September 2025 15:57:42 +0000 (0:00:00.186) 0:00:15.851 ***
2025-09-17 15:57:48.440937 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:48.440944 | orchestrator |
2025-09-17 15:57:48.440949 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.440956 | orchestrator | Wednesday 17 September 2025 15:57:43 +0000 (0:00:00.213) 0:00:16.065 ***
2025-09-17 15:57:48.440963 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d)
2025-09-17 15:57:48.440971 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d)
2025-09-17 15:57:48.440978 | orchestrator |
2025-09-17 15:57:48.440986 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.440993 | orchestrator | Wednesday 17 September 2025 15:57:43 +0000 (0:00:00.421) 0:00:16.486 ***
2025-09-17 15:57:48.441000 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_389a752f-d381-48a7-a4b5-e7f86559b7a2)
2025-09-17 15:57:48.441007 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_389a752f-d381-48a7-a4b5-e7f86559b7a2)
2025-09-17 15:57:48.441014 | orchestrator |
2025-09-17 15:57:48.441021 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.441028 | orchestrator | Wednesday 17 September 2025 15:57:43 +0000 (0:00:00.377) 0:00:16.863 ***
2025-09-17 15:57:48.441034 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_11507ddf-c78f-4c5a-8643-6bad2a8b39ae)
2025-09-17 15:57:48.441042 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_11507ddf-c78f-4c5a-8643-6bad2a8b39ae)
2025-09-17 15:57:48.441050 | orchestrator |
2025-09-17 15:57:48.441057 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.441065 | orchestrator | Wednesday 17 September 2025 15:57:44 +0000 (0:00:00.400) 0:00:17.263 ***
2025-09-17 15:57:48.441087 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ff1b16a2-3e0a-432a-b441-b3fe8b453f6d)
2025-09-17 15:57:48.441094 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ff1b16a2-3e0a-432a-b441-b3fe8b453f6d)
2025-09-17 15:57:48.441101 | orchestrator |
2025-09-17 15:57:48.441107 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:57:48.441115 | orchestrator | Wednesday 17 September 2025 15:57:44 +0000 (0:00:00.420) 0:00:17.683 ***
2025-09-17 15:57:48.441121 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-17 15:57:48.441127 | orchestrator |
2025-09-17 15:57:48.441134 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:57:48.441147 | orchestrator | Wednesday 17 September 2025 15:57:45 +0000 (0:00:00.323) 0:00:18.007 ***
2025-09-17 15:57:48.441154 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-17 15:57:48.441161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-17 15:57:48.441177 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-17 15:57:48.441183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-17 15:57:48.441190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 =>
(item=loop4) 2025-09-17 15:57:48.441197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-17 15:57:48.441204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-17 15:57:48.441211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-17 15:57:48.441218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-17 15:57:48.441224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-17 15:57:48.441231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-17 15:57:48.441237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-17 15:57:48.441244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-17 15:57:48.441250 | orchestrator | 2025-09-17 15:57:48.441257 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:48.441264 | orchestrator | Wednesday 17 September 2025 15:57:45 +0000 (0:00:00.372) 0:00:18.379 *** 2025-09-17 15:57:48.441271 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:48.441279 | orchestrator | 2025-09-17 15:57:48.441287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:48.441295 | orchestrator | Wednesday 17 September 2025 15:57:45 +0000 (0:00:00.202) 0:00:18.582 *** 2025-09-17 15:57:48.441301 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:48.441308 | orchestrator | 2025-09-17 15:57:48.441316 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:48.441323 | orchestrator | Wednesday 17 
September 2025 15:57:46 +0000 (0:00:00.704) 0:00:19.286 *** 2025-09-17 15:57:48.441329 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:48.441336 | orchestrator | 2025-09-17 15:57:48.441342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:48.441349 | orchestrator | Wednesday 17 September 2025 15:57:46 +0000 (0:00:00.197) 0:00:19.484 *** 2025-09-17 15:57:48.441356 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:48.441363 | orchestrator | 2025-09-17 15:57:48.441369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:48.441377 | orchestrator | Wednesday 17 September 2025 15:57:46 +0000 (0:00:00.196) 0:00:19.681 *** 2025-09-17 15:57:48.441384 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:48.441390 | orchestrator | 2025-09-17 15:57:48.441396 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:48.441402 | orchestrator | Wednesday 17 September 2025 15:57:46 +0000 (0:00:00.207) 0:00:19.888 *** 2025-09-17 15:57:48.441408 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:48.441417 | orchestrator | 2025-09-17 15:57:48.441423 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:48.441430 | orchestrator | Wednesday 17 September 2025 15:57:47 +0000 (0:00:00.211) 0:00:20.099 *** 2025-09-17 15:57:48.441437 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:48.441443 | orchestrator | 2025-09-17 15:57:48.441450 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:48.441476 | orchestrator | Wednesday 17 September 2025 15:57:47 +0000 (0:00:00.211) 0:00:20.311 *** 2025-09-17 15:57:48.441483 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:48.441490 | orchestrator | 2025-09-17 15:57:48.441497 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:48.441511 | orchestrator | Wednesday 17 September 2025 15:57:47 +0000 (0:00:00.207) 0:00:20.519 *** 2025-09-17 15:57:48.441518 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-17 15:57:48.441526 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-17 15:57:48.441532 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-17 15:57:48.441538 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-17 15:57:48.441544 | orchestrator | 2025-09-17 15:57:48.441551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:48.441557 | orchestrator | Wednesday 17 September 2025 15:57:48 +0000 (0:00:00.656) 0:00:21.175 *** 2025-09-17 15:57:48.441563 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:48.441569 | orchestrator | 2025-09-17 15:57:48.441583 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:53.924144 | orchestrator | Wednesday 17 September 2025 15:57:48 +0000 (0:00:00.245) 0:00:21.420 *** 2025-09-17 15:57:53.924240 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:53.924256 | orchestrator | 2025-09-17 15:57:53.924268 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:53.924279 | orchestrator | Wednesday 17 September 2025 15:57:48 +0000 (0:00:00.301) 0:00:21.722 *** 2025-09-17 15:57:53.924290 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:53.924301 | orchestrator | 2025-09-17 15:57:53.924311 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:57:53.924322 | orchestrator | Wednesday 17 September 2025 15:57:48 +0000 (0:00:00.221) 0:00:21.943 *** 2025-09-17 15:57:53.924332 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:53.924343 | 
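The repeated "Add known partitions" tasks above run one include per block device and only record partitions for the root disk (sda1, sda14, sda15, sda16); the data disks sdb/sdc/sdd have none and are skipped. A minimal sketch of that matching step, assuming a hypothetical helper name `partitions_of` (the real task logic lives in `/ansible/tasks/_add-device-partitions.yml` and is not shown in the log):

```python
import re

def partitions_of(device: str, block_names: list[str]) -> list[str]:
    """Return partition names belonging to `device`, e.g. sda -> sda1, sda14.

    Matches the device name followed by digits (with an optional 'p'
    separator for nvme-style names), mirroring the sda1/sda14/sda15/sda16
    items seen in the task output.
    """
    pat = re.compile(rf"^{re.escape(device)}p?\d+$")
    return sorted(
        (name for name in block_names if pat.match(name)),
        key=lambda name: int(re.search(r"\d+$", name).group()),
    )
```

With the device list from this node, `partitions_of("sda", [...])` yields the four root-disk partitions, while `partitions_of("sdb", [...])` is empty, matching the skipped tasks.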
orchestrator | 2025-09-17 15:57:53.924369 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-17 15:57:53.924381 | orchestrator | Wednesday 17 September 2025 15:57:49 +0000 (0:00:00.209) 0:00:22.153 *** 2025-09-17 15:57:53.924392 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-17 15:57:53.924402 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-17 15:57:53.924413 | orchestrator | 2025-09-17 15:57:53.924423 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-17 15:57:53.924434 | orchestrator | Wednesday 17 September 2025 15:57:49 +0000 (0:00:00.488) 0:00:22.641 *** 2025-09-17 15:57:53.924444 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:53.924493 | orchestrator | 2025-09-17 15:57:53.924505 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-17 15:57:53.924515 | orchestrator | Wednesday 17 September 2025 15:57:49 +0000 (0:00:00.131) 0:00:22.772 *** 2025-09-17 15:57:53.924526 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:53.924537 | orchestrator | 2025-09-17 15:57:53.924548 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-17 15:57:53.924558 | orchestrator | Wednesday 17 September 2025 15:57:49 +0000 (0:00:00.142) 0:00:22.915 *** 2025-09-17 15:57:53.924569 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:53.924579 | orchestrator | 2025-09-17 15:57:53.924590 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-17 15:57:53.924600 | orchestrator | Wednesday 17 September 2025 15:57:50 +0000 (0:00:00.132) 0:00:23.048 *** 2025-09-17 15:57:53.924610 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:57:53.924622 | orchestrator | 2025-09-17 15:57:53.924632 | orchestrator | TASK [Generate 
lvm_volumes structure (block only)] ***************************** 2025-09-17 15:57:53.924643 | orchestrator | Wednesday 17 September 2025 15:57:50 +0000 (0:00:00.108) 0:00:23.157 *** 2025-09-17 15:57:53.924654 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '17f552da-d70b-5fe0-b76a-79be1323ddb4'}}) 2025-09-17 15:57:53.924664 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd72d4826-7802-5629-b85e-59298af53c3a'}}) 2025-09-17 15:57:53.924675 | orchestrator | 2025-09-17 15:57:53.924686 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-17 15:57:53.924718 | orchestrator | Wednesday 17 September 2025 15:57:50 +0000 (0:00:00.125) 0:00:23.283 *** 2025-09-17 15:57:53.924731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '17f552da-d70b-5fe0-b76a-79be1323ddb4'}})  2025-09-17 15:57:53.924745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd72d4826-7802-5629-b85e-59298af53c3a'}})  2025-09-17 15:57:53.924756 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:53.924769 | orchestrator | 2025-09-17 15:57:53.924782 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-17 15:57:53.924793 | orchestrator | Wednesday 17 September 2025 15:57:50 +0000 (0:00:00.106) 0:00:23.389 *** 2025-09-17 15:57:53.924805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '17f552da-d70b-5fe0-b76a-79be1323ddb4'}})  2025-09-17 15:57:53.924817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd72d4826-7802-5629-b85e-59298af53c3a'}})  2025-09-17 15:57:53.924829 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:53.924841 | orchestrator | 2025-09-17 15:57:53.924853 | orchestrator | TASK [Generate lvm_volumes structure (block 
+ db + wal)] *********************** 2025-09-17 15:57:53.924865 | orchestrator | Wednesday 17 September 2025 15:57:50 +0000 (0:00:00.106) 0:00:23.496 *** 2025-09-17 15:57:53.924877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '17f552da-d70b-5fe0-b76a-79be1323ddb4'}})  2025-09-17 15:57:53.924890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd72d4826-7802-5629-b85e-59298af53c3a'}})  2025-09-17 15:57:53.924901 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:53.924914 | orchestrator | 2025-09-17 15:57:53.924925 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-17 15:57:53.924938 | orchestrator | Wednesday 17 September 2025 15:57:50 +0000 (0:00:00.103) 0:00:23.599 *** 2025-09-17 15:57:53.924949 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:57:53.924961 | orchestrator | 2025-09-17 15:57:53.924973 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-17 15:57:53.924985 | orchestrator | Wednesday 17 September 2025 15:57:50 +0000 (0:00:00.098) 0:00:23.697 *** 2025-09-17 15:57:53.924997 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:57:53.925008 | orchestrator | 2025-09-17 15:57:53.925020 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-17 15:57:53.925031 | orchestrator | Wednesday 17 September 2025 15:57:50 +0000 (0:00:00.096) 0:00:23.794 *** 2025-09-17 15:57:53.925043 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:53.925055 | orchestrator | 2025-09-17 15:57:53.925081 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-17 15:57:53.925092 | orchestrator | Wednesday 17 September 2025 15:57:50 +0000 (0:00:00.091) 0:00:23.885 *** 2025-09-17 15:57:53.925103 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:57:53.925113 | 
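The tasks above turn the per-device `osd_lvm_uuid` values into the block-only `lvm_volumes` entries that appear in the configuration dump that follows. The UUIDs shown (e.g. 17f552da-d70b-5fe0-…) are version-5, name-based UUIDs, which suggests they are derived deterministically; the namespace and name inputs in this sketch are assumptions, only the `osd-block-<uuid>` / `ceph-<uuid>` naming comes from the log:

```python
import uuid

# ASSUMPTION: the real namespace and name inputs are not visible in the log;
# the log only shows that the resulting OSD UUIDs are stable version-5 UUIDs.
OSD_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "osd.testbed.example")

def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Derive a deterministic, name-based UUID for one OSD device."""
    return str(uuid.uuid5(OSD_NAMESPACE, f"{hostname}:{device}"))

def lvm_volumes_block_only(ceph_osd_devices: dict) -> list[dict]:
    """Build block-only lvm_volumes entries from per-device OSD UUIDs,
    following the data/data_vg naming visible in the configuration dump."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]
```

Re-running the playbook reproduces the same UUIDs, which is why the LV/VG names stay stable across deploys.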
orchestrator |
2025-09-17 15:57:53.925124 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-17 15:57:53.925134 | orchestrator | Wednesday 17 September 2025 15:57:51 +0000 (0:00:00.231) 0:00:24.117 ***
2025-09-17 15:57:53.925145 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:53.925155 | orchestrator |
2025-09-17 15:57:53.925165 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-17 15:57:53.925176 | orchestrator | Wednesday 17 September 2025 15:57:51 +0000 (0:00:00.111) 0:00:24.228 ***
2025-09-17 15:57:53.925186 | orchestrator | ok: [testbed-node-4] => {
2025-09-17 15:57:53.925196 | orchestrator |  "ceph_osd_devices": {
2025-09-17 15:57:53.925207 | orchestrator |  "sdb": {
2025-09-17 15:57:53.925218 | orchestrator |  "osd_lvm_uuid": "17f552da-d70b-5fe0-b76a-79be1323ddb4"
2025-09-17 15:57:53.925228 | orchestrator |  },
2025-09-17 15:57:53.925238 | orchestrator |  "sdc": {
2025-09-17 15:57:53.925249 | orchestrator |  "osd_lvm_uuid": "d72d4826-7802-5629-b85e-59298af53c3a"
2025-09-17 15:57:53.925266 | orchestrator |  }
2025-09-17 15:57:53.925277 | orchestrator |  }
2025-09-17 15:57:53.925287 | orchestrator | }
2025-09-17 15:57:53.925298 | orchestrator |
2025-09-17 15:57:53.925308 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-17 15:57:53.925319 | orchestrator | Wednesday 17 September 2025 15:57:51 +0000 (0:00:00.132) 0:00:24.361 ***
2025-09-17 15:57:53.925329 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:53.925339 | orchestrator |
2025-09-17 15:57:53.925355 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-17 15:57:53.925366 | orchestrator | Wednesday 17 September 2025 15:57:51 +0000 (0:00:00.115) 0:00:24.476 ***
2025-09-17 15:57:53.925377 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:53.925387 |
orchestrator |
2025-09-17 15:57:53.925398 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-17 15:57:53.925408 | orchestrator | Wednesday 17 September 2025 15:57:51 +0000 (0:00:00.105) 0:00:24.582 ***
2025-09-17 15:57:53.925418 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:57:53.925429 | orchestrator |
2025-09-17 15:57:53.925439 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-17 15:57:53.925449 | orchestrator | Wednesday 17 September 2025 15:57:51 +0000 (0:00:00.112) 0:00:24.694 ***
2025-09-17 15:57:53.925477 | orchestrator | changed: [testbed-node-4] => {
2025-09-17 15:57:53.925488 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-09-17 15:57:53.925498 | orchestrator |  "ceph_osd_devices": {
2025-09-17 15:57:53.925508 | orchestrator |  "sdb": {
2025-09-17 15:57:53.925519 | orchestrator |  "osd_lvm_uuid": "17f552da-d70b-5fe0-b76a-79be1323ddb4"
2025-09-17 15:57:53.925529 | orchestrator |  },
2025-09-17 15:57:53.925544 | orchestrator |  "sdc": {
2025-09-17 15:57:53.925555 | orchestrator |  "osd_lvm_uuid": "d72d4826-7802-5629-b85e-59298af53c3a"
2025-09-17 15:57:53.925566 | orchestrator |  }
2025-09-17 15:57:53.925576 | orchestrator |  },
2025-09-17 15:57:53.925590 | orchestrator |  "lvm_volumes": [
2025-09-17 15:57:53.925609 | orchestrator |  {
2025-09-17 15:57:53.925624 | orchestrator |  "data": "osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4",
2025-09-17 15:57:53.925642 | orchestrator |  "data_vg": "ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4"
2025-09-17 15:57:53.925661 | orchestrator |  },
2025-09-17 15:57:53.925679 | orchestrator |  {
2025-09-17 15:57:53.925691 | orchestrator |  "data": "osd-block-d72d4826-7802-5629-b85e-59298af53c3a",
2025-09-17 15:57:53.925701 | orchestrator |  "data_vg": "ceph-d72d4826-7802-5629-b85e-59298af53c3a"
2025-09-17 15:57:53.925712 | orchestrator |  }
2025-09-17 15:57:53.925722 | orchestrator |  ]
2025-09-17 15:57:53.925732 | orchestrator |  } 2025-09-17 15:57:53.925743 | orchestrator | } 2025-09-17 15:57:53.925753 | orchestrator | 2025-09-17 15:57:53.925763 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-17 15:57:53.925774 | orchestrator | Wednesday 17 September 2025 15:57:51 +0000 (0:00:00.185) 0:00:24.880 *** 2025-09-17 15:57:53.925785 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-17 15:57:53.925795 | orchestrator | 2025-09-17 15:57:53.925805 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-17 15:57:53.925816 | orchestrator | 2025-09-17 15:57:53.925826 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-17 15:57:53.925837 | orchestrator | Wednesday 17 September 2025 15:57:52 +0000 (0:00:00.884) 0:00:25.764 *** 2025-09-17 15:57:53.925847 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-17 15:57:53.925857 | orchestrator | 2025-09-17 15:57:53.925868 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-17 15:57:53.925878 | orchestrator | Wednesday 17 September 2025 15:57:53 +0000 (0:00:00.466) 0:00:26.231 *** 2025-09-17 15:57:53.925888 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:57:53.925906 | orchestrator | 2025-09-17 15:57:53.925917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:57:53.925927 | orchestrator | Wednesday 17 September 2025 15:57:53 +0000 (0:00:00.410) 0:00:26.642 *** 2025-09-17 15:57:53.925938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-17 15:57:53.925948 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-17 15:57:53.925958 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-17 15:57:53.925969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-17 15:57:53.925979 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-17 15:57:53.925989 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-17 15:57:53.926007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-17 15:58:00.619946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-17 15:58:00.620037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-17 15:58:00.620051 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-17 15:58:00.620062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-17 15:58:00.620073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-17 15:58:00.620084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-17 15:58:00.620096 | orchestrator | 2025-09-17 15:58:00.620107 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:00.620119 | orchestrator | Wednesday 17 September 2025 15:57:53 +0000 (0:00:00.272) 0:00:26.914 *** 2025-09-17 15:58:00.620130 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.620142 | orchestrator | 2025-09-17 15:58:00.620152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:00.620163 | orchestrator | Wednesday 17 September 2025 15:57:54 +0000 (0:00:00.143) 0:00:27.057 *** 2025-09-17 15:58:00.620174 | 
orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.620189 | orchestrator | 2025-09-17 15:58:00.620208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:00.620228 | orchestrator | Wednesday 17 September 2025 15:57:54 +0000 (0:00:00.140) 0:00:27.197 *** 2025-09-17 15:58:00.620247 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.620265 | orchestrator | 2025-09-17 15:58:00.620284 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:00.620303 | orchestrator | Wednesday 17 September 2025 15:57:54 +0000 (0:00:00.141) 0:00:27.339 *** 2025-09-17 15:58:00.620320 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.620340 | orchestrator | 2025-09-17 15:58:00.620359 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:00.620375 | orchestrator | Wednesday 17 September 2025 15:57:54 +0000 (0:00:00.146) 0:00:27.486 *** 2025-09-17 15:58:00.620386 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.620397 | orchestrator | 2025-09-17 15:58:00.620408 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:00.620419 | orchestrator | Wednesday 17 September 2025 15:57:54 +0000 (0:00:00.153) 0:00:27.639 *** 2025-09-17 15:58:00.620429 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.620440 | orchestrator | 2025-09-17 15:58:00.620451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:00.620501 | orchestrator | Wednesday 17 September 2025 15:57:54 +0000 (0:00:00.143) 0:00:27.783 *** 2025-09-17 15:58:00.620514 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.620526 | orchestrator | 2025-09-17 15:58:00.620562 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
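The "Add known links" tasks resolve each candidate device to its stable /dev/disk/by-id names (the paired scsi-0QEMU_… and scsi-SQEMU_… entries, plus ata-… for the DVD drive), so later steps can address disks independently of sdX ordering. A sketch of the selection step over an already-collected link map; `links_for_device` and the map layout are hypothetical, and on a real host the map would be built by reading the /dev/disk/by-id symlink targets:

```python
def links_for_device(device: str, by_id_links: dict[str, str]) -> list[str]:
    """Given a mapping of /dev/disk/by-id link name -> kernel device name,
    return the stable link names that point at `device`."""
    return sorted(
        name for name, target in by_id_links.items() if target == device
    )
```

For the sdb disk on testbed-node-5 this returns both QEMU hard-disk aliases, matching the two `ok: … (item=scsi-…)` results per device in the log.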
2025-09-17 15:58:00.620575 | orchestrator | Wednesday 17 September 2025 15:57:54 +0000 (0:00:00.163) 0:00:27.946 *** 2025-09-17 15:58:00.620588 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.620600 | orchestrator | 2025-09-17 15:58:00.620625 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:00.620639 | orchestrator | Wednesday 17 September 2025 15:57:55 +0000 (0:00:00.151) 0:00:28.098 *** 2025-09-17 15:58:00.620652 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767) 2025-09-17 15:58:00.620666 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767) 2025-09-17 15:58:00.620678 | orchestrator | 2025-09-17 15:58:00.620691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:00.620704 | orchestrator | Wednesday 17 September 2025 15:57:55 +0000 (0:00:00.442) 0:00:28.541 *** 2025-09-17 15:58:00.620716 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2998f2ff-923f-4644-b235-1d192431ff16) 2025-09-17 15:58:00.620728 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2998f2ff-923f-4644-b235-1d192431ff16) 2025-09-17 15:58:00.620740 | orchestrator | 2025-09-17 15:58:00.620752 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:00.620764 | orchestrator | Wednesday 17 September 2025 15:57:56 +0000 (0:00:00.587) 0:00:29.129 *** 2025-09-17 15:58:00.620777 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2c018cbd-00e9-4926-8b68-5b46915e5cd3) 2025-09-17 15:58:00.620789 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2c018cbd-00e9-4926-8b68-5b46915e5cd3) 2025-09-17 15:58:00.620801 | orchestrator | 2025-09-17 15:58:00.620814 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2025-09-17 15:58:00.620826 | orchestrator | Wednesday 17 September 2025 15:57:56 +0000 (0:00:00.388) 0:00:29.518 *** 2025-09-17 15:58:00.620838 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4c46eecc-90d4-4da2-9e84-51f99bffdbae) 2025-09-17 15:58:00.620851 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4c46eecc-90d4-4da2-9e84-51f99bffdbae) 2025-09-17 15:58:00.620861 | orchestrator | 2025-09-17 15:58:00.620872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:00.620883 | orchestrator | Wednesday 17 September 2025 15:57:56 +0000 (0:00:00.384) 0:00:29.902 *** 2025-09-17 15:58:00.620893 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-17 15:58:00.620904 | orchestrator | 2025-09-17 15:58:00.620914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.620925 | orchestrator | Wednesday 17 September 2025 15:57:57 +0000 (0:00:00.273) 0:00:30.176 *** 2025-09-17 15:58:00.620953 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-17 15:58:00.620965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-17 15:58:00.620982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-17 15:58:00.621002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-17 15:58:00.621022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-17 15:58:00.621042 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-17 15:58:00.621071 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-5 => (item=loop6) 2025-09-17 15:58:00.621094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-17 15:58:00.621114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-17 15:58:00.621141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-17 15:58:00.621152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-17 15:58:00.621162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-17 15:58:00.621173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-17 15:58:00.621183 | orchestrator | 2025-09-17 15:58:00.621194 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.621204 | orchestrator | Wednesday 17 September 2025 15:57:57 +0000 (0:00:00.372) 0:00:30.549 *** 2025-09-17 15:58:00.621215 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.621225 | orchestrator | 2025-09-17 15:58:00.621235 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.621246 | orchestrator | Wednesday 17 September 2025 15:57:57 +0000 (0:00:00.155) 0:00:30.704 *** 2025-09-17 15:58:00.621256 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.621267 | orchestrator | 2025-09-17 15:58:00.621277 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.621288 | orchestrator | Wednesday 17 September 2025 15:57:57 +0000 (0:00:00.165) 0:00:30.869 *** 2025-09-17 15:58:00.621298 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.621309 | orchestrator | 2025-09-17 15:58:00.621319 | orchestrator | TASK [Add known partitions to the 
list of available block devices] ************* 2025-09-17 15:58:00.621338 | orchestrator | Wednesday 17 September 2025 15:57:58 +0000 (0:00:00.178) 0:00:31.048 *** 2025-09-17 15:58:00.621356 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.621374 | orchestrator | 2025-09-17 15:58:00.621398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.621423 | orchestrator | Wednesday 17 September 2025 15:57:58 +0000 (0:00:00.189) 0:00:31.237 *** 2025-09-17 15:58:00.621441 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.621483 | orchestrator | 2025-09-17 15:58:00.621497 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.621508 | orchestrator | Wednesday 17 September 2025 15:57:58 +0000 (0:00:00.173) 0:00:31.410 *** 2025-09-17 15:58:00.621518 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.621529 | orchestrator | 2025-09-17 15:58:00.621539 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.621550 | orchestrator | Wednesday 17 September 2025 15:57:58 +0000 (0:00:00.468) 0:00:31.879 *** 2025-09-17 15:58:00.621560 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.621571 | orchestrator | 2025-09-17 15:58:00.621581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.621592 | orchestrator | Wednesday 17 September 2025 15:57:59 +0000 (0:00:00.194) 0:00:32.073 *** 2025-09-17 15:58:00.621602 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.621612 | orchestrator | 2025-09-17 15:58:00.621623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.621633 | orchestrator | Wednesday 17 September 2025 15:57:59 +0000 (0:00:00.150) 0:00:32.224 *** 2025-09-17 15:58:00.621644 | orchestrator | ok: 
[testbed-node-5] => (item=sda1) 2025-09-17 15:58:00.621654 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-17 15:58:00.621665 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-17 15:58:00.621675 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-17 15:58:00.621686 | orchestrator | 2025-09-17 15:58:00.621696 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.621706 | orchestrator | Wednesday 17 September 2025 15:57:59 +0000 (0:00:00.648) 0:00:32.872 *** 2025-09-17 15:58:00.621717 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.621728 | orchestrator | 2025-09-17 15:58:00.621738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.621749 | orchestrator | Wednesday 17 September 2025 15:58:00 +0000 (0:00:00.163) 0:00:33.036 *** 2025-09-17 15:58:00.621768 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.621779 | orchestrator | 2025-09-17 15:58:00.621790 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.621800 | orchestrator | Wednesday 17 September 2025 15:58:00 +0000 (0:00:00.181) 0:00:33.217 *** 2025-09-17 15:58:00.621811 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.621821 | orchestrator | 2025-09-17 15:58:00.621832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:00.621842 | orchestrator | Wednesday 17 September 2025 15:58:00 +0000 (0:00:00.189) 0:00:33.407 *** 2025-09-17 15:58:00.621860 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:00.621871 | orchestrator | 2025-09-17 15:58:00.621881 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-17 15:58:00.621901 | orchestrator | Wednesday 17 September 2025 15:58:00 +0000 (0:00:00.195) 0:00:33.602 *** 2025-09-17 
15:58:04.550627 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-09-17 15:58:04.550742 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-17 15:58:04.550757 | orchestrator | 2025-09-17 15:58:04.550770 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-17 15:58:04.550781 | orchestrator | Wednesday 17 September 2025 15:58:00 +0000 (0:00:00.143) 0:00:33.746 *** 2025-09-17 15:58:04.550791 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:04.550802 | orchestrator | 2025-09-17 15:58:04.550813 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-17 15:58:04.550824 | orchestrator | Wednesday 17 September 2025 15:58:00 +0000 (0:00:00.127) 0:00:33.874 *** 2025-09-17 15:58:04.550834 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:04.550845 | orchestrator | 2025-09-17 15:58:04.550856 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-17 15:58:04.550866 | orchestrator | Wednesday 17 September 2025 15:58:01 +0000 (0:00:00.125) 0:00:34.000 *** 2025-09-17 15:58:04.550877 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:04.550888 | orchestrator | 2025-09-17 15:58:04.550898 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-17 15:58:04.550909 | orchestrator | Wednesday 17 September 2025 15:58:01 +0000 (0:00:00.098) 0:00:34.098 *** 2025-09-17 15:58:04.550920 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:58:04.550931 | orchestrator | 2025-09-17 15:58:04.550941 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-17 15:58:04.550952 | orchestrator | Wednesday 17 September 2025 15:58:01 +0000 (0:00:00.246) 0:00:34.344 *** 2025-09-17 15:58:04.550963 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 
'value': {'osd_lvm_uuid': '2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'}}) 2025-09-17 15:58:04.550975 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ce5409dd-a4db-5391-81df-07600c6136f3'}}) 2025-09-17 15:58:04.550986 | orchestrator | 2025-09-17 15:58:04.550996 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-17 15:58:04.551007 | orchestrator | Wednesday 17 September 2025 15:58:01 +0000 (0:00:00.147) 0:00:34.492 *** 2025-09-17 15:58:04.551018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'}})  2025-09-17 15:58:04.551029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ce5409dd-a4db-5391-81df-07600c6136f3'}})  2025-09-17 15:58:04.551040 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:04.551051 | orchestrator | 2025-09-17 15:58:04.551075 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-17 15:58:04.551086 | orchestrator | Wednesday 17 September 2025 15:58:01 +0000 (0:00:00.138) 0:00:34.631 *** 2025-09-17 15:58:04.551097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'}})  2025-09-17 15:58:04.551108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ce5409dd-a4db-5391-81df-07600c6136f3'}})  2025-09-17 15:58:04.551136 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:04.551147 | orchestrator | 2025-09-17 15:58:04.551157 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-17 15:58:04.551168 | orchestrator | Wednesday 17 September 2025 15:58:01 +0000 (0:00:00.135) 0:00:34.767 *** 2025-09-17 15:58:04.551179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': 
{'osd_lvm_uuid': '2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'}})  2025-09-17 15:58:04.551193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ce5409dd-a4db-5391-81df-07600c6136f3'}})  2025-09-17 15:58:04.551205 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:04.551218 | orchestrator | 2025-09-17 15:58:04.551230 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-17 15:58:04.551243 | orchestrator | Wednesday 17 September 2025 15:58:01 +0000 (0:00:00.144) 0:00:34.911 *** 2025-09-17 15:58:04.551255 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:58:04.551267 | orchestrator | 2025-09-17 15:58:04.551279 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-17 15:58:04.551291 | orchestrator | Wednesday 17 September 2025 15:58:02 +0000 (0:00:00.127) 0:00:35.039 *** 2025-09-17 15:58:04.551303 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:58:04.551315 | orchestrator | 2025-09-17 15:58:04.551327 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-17 15:58:04.551339 | orchestrator | Wednesday 17 September 2025 15:58:02 +0000 (0:00:00.147) 0:00:35.187 *** 2025-09-17 15:58:04.551351 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:04.551363 | orchestrator | 2025-09-17 15:58:04.551375 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-17 15:58:04.551387 | orchestrator | Wednesday 17 September 2025 15:58:02 +0000 (0:00:00.110) 0:00:35.297 *** 2025-09-17 15:58:04.551399 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:58:04.551413 | orchestrator | 2025-09-17 15:58:04.551424 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-17 15:58:04.551436 | orchestrator | Wednesday 17 September 2025 15:58:02 +0000 (0:00:00.120) 0:00:35.417 *** 
2025-09-17 15:58:04.551448 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:58:04.551476 | orchestrator |
2025-09-17 15:58:04.551488 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-17 15:58:04.551500 | orchestrator | Wednesday 17 September 2025 15:58:02 +0000 (0:00:00.109) 0:00:35.526 ***
2025-09-17 15:58:04.551512 | orchestrator | ok: [testbed-node-5] => {
2025-09-17 15:58:04.551524 | orchestrator |  "ceph_osd_devices": {
2025-09-17 15:58:04.551536 | orchestrator |  "sdb": {
2025-09-17 15:58:04.551547 | orchestrator |  "osd_lvm_uuid": "2618dc29-ef9a-5981-b8ae-0a6fa7f1f133"
2025-09-17 15:58:04.551575 | orchestrator |  },
2025-09-17 15:58:04.551587 | orchestrator |  "sdc": {
2025-09-17 15:58:04.551597 | orchestrator |  "osd_lvm_uuid": "ce5409dd-a4db-5391-81df-07600c6136f3"
2025-09-17 15:58:04.551608 | orchestrator |  }
2025-09-17 15:58:04.551619 | orchestrator |  }
2025-09-17 15:58:04.551630 | orchestrator | }
2025-09-17 15:58:04.551641 | orchestrator |
2025-09-17 15:58:04.551652 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-17 15:58:04.551662 | orchestrator | Wednesday 17 September 2025 15:58:02 +0000 (0:00:00.288) 0:00:35.815 ***
2025-09-17 15:58:04.551673 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:58:04.551684 | orchestrator |
2025-09-17 15:58:04.551694 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-17 15:58:04.551705 | orchestrator | Wednesday 17 September 2025 15:58:02 +0000 (0:00:00.111) 0:00:35.926 ***
2025-09-17 15:58:04.551716 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:58:04.551726 | orchestrator |
2025-09-17 15:58:04.551737 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-17 15:58:04.551797 | orchestrator | Wednesday 17 September 2025 15:58:03 +0000 (0:00:00.283) 0:00:36.210 ***
2025-09-17 15:58:04.551808 | orchestrator | skipping: [testbed-node-5]
2025-09-17 15:58:04.551819 | orchestrator |
2025-09-17 15:58:04.551830 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-17 15:58:04.551840 | orchestrator | Wednesday 17 September 2025 15:58:03 +0000 (0:00:00.125) 0:00:36.335 ***
2025-09-17 15:58:04.551851 | orchestrator | changed: [testbed-node-5] => {
2025-09-17 15:58:04.551862 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-09-17 15:58:04.551872 | orchestrator |  "ceph_osd_devices": {
2025-09-17 15:58:04.551883 | orchestrator |  "sdb": {
2025-09-17 15:58:04.551894 | orchestrator |  "osd_lvm_uuid": "2618dc29-ef9a-5981-b8ae-0a6fa7f1f133"
2025-09-17 15:58:04.551904 | orchestrator |  },
2025-09-17 15:58:04.551915 | orchestrator |  "sdc": {
2025-09-17 15:58:04.551926 | orchestrator |  "osd_lvm_uuid": "ce5409dd-a4db-5391-81df-07600c6136f3"
2025-09-17 15:58:04.551936 | orchestrator |  }
2025-09-17 15:58:04.551947 | orchestrator |  },
2025-09-17 15:58:04.551957 | orchestrator |  "lvm_volumes": [
2025-09-17 15:58:04.551968 | orchestrator |  {
2025-09-17 15:58:04.551978 | orchestrator |  "data": "osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133",
2025-09-17 15:58:04.551989 | orchestrator |  "data_vg": "ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133"
2025-09-17 15:58:04.552000 | orchestrator |  },
2025-09-17 15:58:04.552010 | orchestrator |  {
2025-09-17 15:58:04.552021 | orchestrator |  "data": "osd-block-ce5409dd-a4db-5391-81df-07600c6136f3",
2025-09-17 15:58:04.552031 | orchestrator |  "data_vg": "ceph-ce5409dd-a4db-5391-81df-07600c6136f3"
2025-09-17 15:58:04.552042 | orchestrator |  }
2025-09-17 15:58:04.552053 | orchestrator |  ]
2025-09-17 15:58:04.552064 | orchestrator |  }
2025-09-17 15:58:04.552074 | orchestrator | }
2025-09-17 15:58:04.552090 | orchestrator |
2025-09-17 15:58:04.552101 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-17 15:58:04.552111 | orchestrator | Wednesday 17 September 2025 15:58:03 +0000 (0:00:00.212) 0:00:36.547 ***
2025-09-17 15:58:04.552122 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-17 15:58:04.552133 | orchestrator |
2025-09-17 15:58:04.552143 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 15:58:04.552161 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-17 15:58:04.552173 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-17 15:58:04.552184 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-17 15:58:04.552194 | orchestrator |
2025-09-17 15:58:04.552205 | orchestrator |
2025-09-17 15:58:04.552216 | orchestrator |
2025-09-17 15:58:04.552226 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 15:58:04.552237 | orchestrator | Wednesday 17 September 2025 15:58:04 +0000 (0:00:00.979) 0:00:37.526 ***
2025-09-17 15:58:04.552247 | orchestrator | ===============================================================================
2025-09-17 15:58:04.552258 | orchestrator | Write configuration file ------------------------------------------------ 3.78s
2025-09-17 15:58:04.552268 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2025-09-17 15:58:04.552279 | orchestrator | Add known links to the list of available block devices ------------------ 1.02s
2025-09-17 15:58:04.552289 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.96s
2025-09-17 15:58:04.552300 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s
2025-09-17 15:58:04.552311 | orchestrator | Get initial list of available block devices ----------------------------- 0.86s
2025-09-17 15:58:04.552327 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.83s
2025-09-17 15:58:04.552338 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2025-09-17 15:58:04.552349 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-09-17 15:58:04.552359 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-09-17 15:58:04.552369 | orchestrator | Print configuration data ------------------------------------------------ 0.59s
2025-09-17 15:58:04.552380 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s
2025-09-17 15:58:04.552403 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2025-09-17 15:58:04.552415 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.55s
2025-09-17 15:58:04.552432 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s
2025-09-17 15:58:04.783980 | orchestrator | Print DB devices -------------------------------------------------------- 0.52s
2025-09-17 15:58:04.784036 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.51s
2025-09-17 15:58:04.784045 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s
2025-09-17 15:58:04.784050 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.50s
2025-09-17 15:58:04.784057 | orchestrator | Set WAL devices config data --------------------------------------------- 0.48s
2025-09-17 15:58:27.028900 | orchestrator | 2025-09-17 15:58:27 | INFO  | Task 9a819189-f9e6-4648-9e61-012e4a661188 (sync inventory) is running in background. Output coming soon.
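The configuration play above derives the "block only" `lvm_volumes` entries directly from the `ceph_osd_devices` dict: each device's `osd_lvm_uuid` is expanded into a VG named `ceph-<uuid>` and an LV named `osd-block-<uuid>`, as shown in the printed configuration data. A minimal sketch of that mapping, using the UUIDs from this run (the helper name is ours, not the play's):

```python
# Sketch of the "Generate lvm_volumes structure (block only)" step.
# Naming pattern taken from the log output: VG "ceph-<uuid>", LV "osd-block-<uuid>".
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "2618dc29-ef9a-5981-b8ae-0a6fa7f1f133"},
    "sdc": {"osd_lvm_uuid": "ce5409dd-a4db-5391-81df-07600c6136f3"},
}

def build_lvm_volumes(devices):
    """Map each OSD device entry to a block-only lvm_volumes item."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in devices.values()
    ]

lvm_volumes = build_lvm_volumes(ceph_osd_devices)
print(lvm_volumes)
```

The result matches the `lvm_volumes` list printed by the "Print configuration data" task before the handler writes it to the configuration file on testbed-manager.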
2025-09-17 15:58:44.888578 | orchestrator | 2025-09-17 15:58:28 | INFO  | Starting group_vars file reorganization
2025-09-17 15:58:44.888681 | orchestrator | 2025-09-17 15:58:28 | INFO  | Moved 0 file(s) to their respective directories
2025-09-17 15:58:44.888698 | orchestrator | 2025-09-17 15:58:28 | INFO  | Group_vars file reorganization completed
2025-09-17 15:58:44.888710 | orchestrator | 2025-09-17 15:58:29 | INFO  | Starting variable preparation from inventory
2025-09-17 15:58:44.888721 | orchestrator | 2025-09-17 15:58:30 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-17 15:58:44.888732 | orchestrator | 2025-09-17 15:58:30 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-17 15:58:44.888744 | orchestrator | 2025-09-17 15:58:30 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-17 15:58:44.888754 | orchestrator | 2025-09-17 15:58:30 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-17 15:58:44.888765 | orchestrator | 2025-09-17 15:58:30 | INFO  | Variable preparation completed
2025-09-17 15:58:44.888776 | orchestrator | 2025-09-17 15:58:31 | INFO  | Starting inventory overwrite handling
2025-09-17 15:58:44.888787 | orchestrator | 2025-09-17 15:58:31 | INFO  | Handling group overwrites in 99-overwrite
2025-09-17 15:58:44.888798 | orchestrator | 2025-09-17 15:58:31 | INFO  | Removing group frr:children from 60-generic
2025-09-17 15:58:44.888809 | orchestrator | 2025-09-17 15:58:31 | INFO  | Removing group storage:children from 50-kolla
2025-09-17 15:58:44.888820 | orchestrator | 2025-09-17 15:58:31 | INFO  | Removing group netbird:children from 50-infrastruture
2025-09-17 15:58:44.888830 | orchestrator | 2025-09-17 15:58:31 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-17 15:58:44.888841 | orchestrator | 2025-09-17 15:58:31 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-17 15:58:44.888852 | orchestrator | 2025-09-17 15:58:31 | INFO  | Handling group overwrites in 20-roles
2025-09-17 15:58:44.888863 | orchestrator | 2025-09-17 15:58:31 | INFO  | Removing group k3s_node from 50-infrastruture
2025-09-17 15:58:44.888895 | orchestrator | 2025-09-17 15:58:31 | INFO  | Removed 6 group(s) in total
2025-09-17 15:58:44.888907 | orchestrator | 2025-09-17 15:58:31 | INFO  | Inventory overwrite handling completed
2025-09-17 15:58:44.888918 | orchestrator | 2025-09-17 15:58:32 | INFO  | Starting merge of inventory files
2025-09-17 15:58:44.888928 | orchestrator | 2025-09-17 15:58:32 | INFO  | Inventory files merged successfully
2025-09-17 15:58:44.888939 | orchestrator | 2025-09-17 15:58:36 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-17 15:58:44.888950 | orchestrator | 2025-09-17 15:58:43 | INFO  | Successfully wrote ClusterShell configuration
2025-09-17 15:58:44.888961 | orchestrator | [master efe81c1] 2025-09-17-15-58
2025-09-17 15:58:44.888972 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-17 15:58:46.602311 | orchestrator | 2025-09-17 15:58:46 | INFO  | Task 61642044-0ab3-4d83-9c45-713f6ea752ae (ceph-create-lvm-devices) was prepared for execution.
2025-09-17 15:58:46.602401 | orchestrator | 2025-09-17 15:58:46 | INFO  | It takes a moment until task 61642044-0ab3-4d83-9c45-713f6ea752ae (ceph-create-lvm-devices) has been started and output is visible here.
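The ceph-create-lvm-devices task that starts below creates one VG per OSD device and one block LV inside it, as seen later in the "Create block VGs" / "Create block LVs" tasks. As an illustration only (the play drives LVM through Ansible modules, and the 100%FREE extent allocation is an assumption on our part), the equivalent raw LVM commands for a single block-only OSD would be:

```python
# Illustrative sketch: generate the vgcreate/lvcreate pair for one block-only
# OSD device. Names follow the pattern in the log; "-l 100%FREE" is assumed.
def osd_lvm_commands(device, osd_lvm_uuid):
    """Return the raw LVM commands equivalent to the play's VG/LV creation."""
    vg = f"ceph-{osd_lvm_uuid}"
    lv = f"osd-block-{osd_lvm_uuid}"
    return [
        f"vgcreate {vg} {device}",
        f"lvcreate -l 100%FREE -n {lv} {vg}",
    ]

# UUID taken from testbed-node-3's sdb entry in the output below.
for cmd in osd_lvm_commands("/dev/sdb", "3c66c71d-5352-5b3e-b37c-d5d685617e79"):
    print(cmd)
```

ceph-volume later consumes these VG/LV names via the `data`/`data_vg` fields of `lvm_volumes` when the OSDs are deployed.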
2025-09-17 15:58:56.900144 | orchestrator | 2025-09-17 15:58:56.900235 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-17 15:58:56.900252 | orchestrator | 2025-09-17 15:58:56.900264 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-17 15:58:56.900293 | orchestrator | Wednesday 17 September 2025 15:58:50 +0000 (0:00:00.238) 0:00:00.238 *** 2025-09-17 15:58:56.900315 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-17 15:58:56.900327 | orchestrator | 2025-09-17 15:58:56.900338 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-17 15:58:56.900349 | orchestrator | Wednesday 17 September 2025 15:58:50 +0000 (0:00:00.241) 0:00:00.480 *** 2025-09-17 15:58:56.900360 | orchestrator | ok: [testbed-node-3] 2025-09-17 15:58:56.900372 | orchestrator | 2025-09-17 15:58:56.900383 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.900394 | orchestrator | Wednesday 17 September 2025 15:58:50 +0000 (0:00:00.211) 0:00:00.691 *** 2025-09-17 15:58:56.900405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-17 15:58:56.900416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-17 15:58:56.900428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-17 15:58:56.900439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-17 15:58:56.900450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-17 15:58:56.900516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-17 15:58:56.900528 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-17 15:58:56.900539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-17 15:58:56.900550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-17 15:58:56.900561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-17 15:58:56.900572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-17 15:58:56.900582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-17 15:58:56.900593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-17 15:58:56.900604 | orchestrator | 2025-09-17 15:58:56.900615 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.900647 | orchestrator | Wednesday 17 September 2025 15:58:50 +0000 (0:00:00.382) 0:00:01.074 *** 2025-09-17 15:58:56.900659 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:58:56.900670 | orchestrator | 2025-09-17 15:58:56.900681 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.900705 | orchestrator | Wednesday 17 September 2025 15:58:51 +0000 (0:00:00.403) 0:00:01.478 *** 2025-09-17 15:58:56.900718 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:58:56.900731 | orchestrator | 2025-09-17 15:58:56.900743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.900755 | orchestrator | Wednesday 17 September 2025 15:58:51 +0000 (0:00:00.244) 0:00:01.722 *** 2025-09-17 15:58:56.900767 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:58:56.900779 | orchestrator | 2025-09-17 15:58:56.900797 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-09-17 15:58:56.900809 | orchestrator | Wednesday 17 September 2025 15:58:51 +0000 (0:00:00.190) 0:00:01.913 *** 2025-09-17 15:58:56.900821 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:58:56.900834 | orchestrator | 2025-09-17 15:58:56.900846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.900858 | orchestrator | Wednesday 17 September 2025 15:58:51 +0000 (0:00:00.180) 0:00:02.093 *** 2025-09-17 15:58:56.900871 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:58:56.900883 | orchestrator | 2025-09-17 15:58:56.900895 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.900907 | orchestrator | Wednesday 17 September 2025 15:58:52 +0000 (0:00:00.182) 0:00:02.276 *** 2025-09-17 15:58:56.900919 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:58:56.900931 | orchestrator | 2025-09-17 15:58:56.900944 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.900956 | orchestrator | Wednesday 17 September 2025 15:58:52 +0000 (0:00:00.190) 0:00:02.467 *** 2025-09-17 15:58:56.900968 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:58:56.900980 | orchestrator | 2025-09-17 15:58:56.900993 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.901005 | orchestrator | Wednesday 17 September 2025 15:58:52 +0000 (0:00:00.191) 0:00:02.658 *** 2025-09-17 15:58:56.901016 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:58:56.901029 | orchestrator | 2025-09-17 15:58:56.901041 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.901054 | orchestrator | Wednesday 17 September 2025 15:58:52 +0000 (0:00:00.174) 0:00:02.833 *** 2025-09-17 15:58:56.901066 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229) 2025-09-17 15:58:56.901079 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229) 2025-09-17 15:58:56.901090 | orchestrator | 2025-09-17 15:58:56.901101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.901112 | orchestrator | Wednesday 17 September 2025 15:58:53 +0000 (0:00:00.437) 0:00:03.271 *** 2025-09-17 15:58:56.901138 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_abcb278f-9464-4e60-af45-8a9c7109c560) 2025-09-17 15:58:56.901150 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_abcb278f-9464-4e60-af45-8a9c7109c560) 2025-09-17 15:58:56.901161 | orchestrator | 2025-09-17 15:58:56.901181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.901200 | orchestrator | Wednesday 17 September 2025 15:58:53 +0000 (0:00:00.405) 0:00:03.677 *** 2025-09-17 15:58:56.901219 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5c4f947a-1fa9-4d40-922c-b00760e10f53) 2025-09-17 15:58:56.901238 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5c4f947a-1fa9-4d40-922c-b00760e10f53) 2025-09-17 15:58:56.901257 | orchestrator | 2025-09-17 15:58:56.901275 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.901298 | orchestrator | Wednesday 17 September 2025 15:58:53 +0000 (0:00:00.509) 0:00:04.187 *** 2025-09-17 15:58:56.901309 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_81270acf-a9ce-49fe-b935-471dffd13372) 2025-09-17 15:58:56.901319 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_81270acf-a9ce-49fe-b935-471dffd13372) 2025-09-17 15:58:56.901330 | orchestrator | 2025-09-17 15:58:56.901340 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:58:56.901351 | orchestrator | Wednesday 17 September 2025 15:58:54 +0000 (0:00:00.528) 0:00:04.716 *** 2025-09-17 15:58:56.901361 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-17 15:58:56.901372 | orchestrator | 2025-09-17 15:58:56.901382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:56.901393 | orchestrator | Wednesday 17 September 2025 15:58:55 +0000 (0:00:00.536) 0:00:05.252 *** 2025-09-17 15:58:56.901403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-17 15:58:56.901414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-17 15:58:56.901425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-17 15:58:56.901435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-17 15:58:56.901446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-17 15:58:56.901478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-17 15:58:56.901495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-17 15:58:56.901506 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-17 15:58:56.901516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-17 15:58:56.901526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-17 15:58:56.901537 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-17 15:58:56.901547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-17 15:58:56.901557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-17 15:58:56.901568 | orchestrator | 2025-09-17 15:58:56.901578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:56.901589 | orchestrator | Wednesday 17 September 2025 15:58:55 +0000 (0:00:00.373) 0:00:05.626 *** 2025-09-17 15:58:56.901599 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:58:56.901610 | orchestrator | 2025-09-17 15:58:56.901620 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:56.901631 | orchestrator | Wednesday 17 September 2025 15:58:55 +0000 (0:00:00.187) 0:00:05.813 *** 2025-09-17 15:58:56.901641 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:58:56.901652 | orchestrator | 2025-09-17 15:58:56.901662 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:56.901672 | orchestrator | Wednesday 17 September 2025 15:58:55 +0000 (0:00:00.196) 0:00:06.010 *** 2025-09-17 15:58:56.901683 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:58:56.901693 | orchestrator | 2025-09-17 15:58:56.901703 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:56.901714 | orchestrator | Wednesday 17 September 2025 15:58:55 +0000 (0:00:00.188) 0:00:06.199 *** 2025-09-17 15:58:56.901724 | orchestrator | skipping: [testbed-node-3] 2025-09-17 15:58:56.901735 | orchestrator | 2025-09-17 15:58:56.901745 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:58:56.901756 | orchestrator | Wednesday 17 September 
2025 15:58:56 +0000 (0:00:00.176) 0:00:06.375 ***
2025-09-17 15:58:56.901773 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:58:56.901783 | orchestrator |
2025-09-17 15:58:56.901794 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:58:56.901804 | orchestrator | Wednesday 17 September 2025 15:58:56 +0000 (0:00:00.169) 0:00:06.544 ***
2025-09-17 15:58:56.901815 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:58:56.901825 | orchestrator |
2025-09-17 15:58:56.901835 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:58:56.901846 | orchestrator | Wednesday 17 September 2025 15:58:56 +0000 (0:00:00.190) 0:00:06.735 ***
2025-09-17 15:58:56.901856 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:58:56.901867 | orchestrator |
2025-09-17 15:58:56.901877 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:58:56.901888 | orchestrator | Wednesday 17 September 2025 15:58:56 +0000 (0:00:00.177) 0:00:06.912 ***
2025-09-17 15:58:56.901906 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.553223 | orchestrator |
2025-09-17 15:59:04.553309 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:59:04.553325 | orchestrator | Wednesday 17 September 2025 15:58:56 +0000 (0:00:00.188) 0:00:07.101 ***
2025-09-17 15:59:04.553337 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-17 15:59:04.553348 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-17 15:59:04.553359 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-17 15:59:04.553370 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-17 15:59:04.553381 | orchestrator |
2025-09-17 15:59:04.553392 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:59:04.553402 | orchestrator | Wednesday 17 September 2025 15:58:57 +0000 (0:00:00.987) 0:00:08.088 ***
2025-09-17 15:59:04.553413 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.553424 | orchestrator |
2025-09-17 15:59:04.553435 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:59:04.553445 | orchestrator | Wednesday 17 September 2025 15:58:58 +0000 (0:00:00.212) 0:00:08.301 ***
2025-09-17 15:59:04.553500 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.553513 | orchestrator |
2025-09-17 15:59:04.553524 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:59:04.553535 | orchestrator | Wednesday 17 September 2025 15:58:58 +0000 (0:00:00.218) 0:00:08.519 ***
2025-09-17 15:59:04.553546 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.553556 | orchestrator |
2025-09-17 15:59:04.553567 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-17 15:59:04.553578 | orchestrator | Wednesday 17 September 2025 15:58:58 +0000 (0:00:00.217) 0:00:08.737 ***
2025-09-17 15:59:04.553589 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.553599 | orchestrator |
2025-09-17 15:59:04.553610 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-17 15:59:04.553621 | orchestrator | Wednesday 17 September 2025 15:58:58 +0000 (0:00:00.216) 0:00:08.954 ***
2025-09-17 15:59:04.553632 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.553642 | orchestrator |
2025-09-17 15:59:04.553653 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-17 15:59:04.553663 | orchestrator | Wednesday 17 September 2025 15:58:58 +0000 (0:00:00.152) 0:00:09.107 ***
2025-09-17 15:59:04.553674 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c66c71d-5352-5b3e-b37c-d5d685617e79'}})
2025-09-17 15:59:04.553686 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'}})
2025-09-17 15:59:04.553696 | orchestrator |
2025-09-17 15:59:04.553709 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-17 15:59:04.553728 | orchestrator | Wednesday 17 September 2025 15:58:59 +0000 (0:00:00.180) 0:00:09.287 ***
2025-09-17 15:59:04.553749 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:04.553792 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:04.553806 | orchestrator |
2025-09-17 15:59:04.553833 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-17 15:59:04.553846 | orchestrator | Wednesday 17 September 2025 15:59:01 +0000 (0:00:01.968) 0:00:11.256 ***
2025-09-17 15:59:04.553865 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:04.553879 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:04.553891 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.553903 | orchestrator |
2025-09-17 15:59:04.553916 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-17 15:59:04.553928 | orchestrator | Wednesday 17 September 2025 15:59:01 +0000 (0:00:00.137) 0:00:11.394 ***
2025-09-17 15:59:04.553940 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:04.553953 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:04.553965 | orchestrator |
2025-09-17 15:59:04.553977 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-17 15:59:04.553990 | orchestrator | Wednesday 17 September 2025 15:59:02 +0000 (0:00:01.391) 0:00:12.785 ***
2025-09-17 15:59:04.554002 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:04.554071 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:04.554087 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.554100 | orchestrator |
2025-09-17 15:59:04.554112 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-17 15:59:04.554124 | orchestrator | Wednesday 17 September 2025 15:59:02 +0000 (0:00:00.144) 0:00:12.929 ***
2025-09-17 15:59:04.554137 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.554149 | orchestrator |
2025-09-17 15:59:04.554162 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-17 15:59:04.554191 | orchestrator | Wednesday 17 September 2025 15:59:02 +0000 (0:00:00.131) 0:00:13.060 ***
2025-09-17 15:59:04.554203 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:04.554214 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:04.554224 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.554235 | orchestrator |
2025-09-17 15:59:04.554245 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-17 15:59:04.554256 | orchestrator | Wednesday 17 September 2025 15:59:03 +0000 (0:00:00.289) 0:00:13.350 ***
2025-09-17 15:59:04.554267 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.554277 | orchestrator |
2025-09-17 15:59:04.554288 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-17 15:59:04.554298 | orchestrator | Wednesday 17 September 2025 15:59:03 +0000 (0:00:00.130) 0:00:13.480 ***
2025-09-17 15:59:04.554309 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:04.554328 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:04.554339 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.554349 | orchestrator |
2025-09-17 15:59:04.554360 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-17 15:59:04.554371 | orchestrator | Wednesday 17 September 2025 15:59:03 +0000 (0:00:00.153) 0:00:13.633 ***
2025-09-17 15:59:04.554381 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.554392 | orchestrator |
2025-09-17 15:59:04.554402 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-17 15:59:04.554413 | orchestrator | Wednesday 17 September 2025 15:59:03 +0000 (0:00:00.174) 0:00:13.807 ***
2025-09-17 15:59:04.554423 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:04.554434 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:04.554444 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.554455 | orchestrator |
2025-09-17 15:59:04.554511 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-17 15:59:04.554529 | orchestrator | Wednesday 17 September 2025 15:59:03 +0000 (0:00:00.144) 0:00:13.952 ***
2025-09-17 15:59:04.554547 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:59:04.554566 | orchestrator |
2025-09-17 15:59:04.554586 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-17 15:59:04.554598 | orchestrator | Wednesday 17 September 2025 15:59:03 +0000 (0:00:00.128) 0:00:14.081 ***
2025-09-17 15:59:04.554608 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:04.554625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:04.554636 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.554647 | orchestrator |
2025-09-17 15:59:04.554657 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-17 15:59:04.554668 | orchestrator | Wednesday 17 September 2025 15:59:04 +0000 (0:00:00.156) 0:00:14.237 ***
2025-09-17 15:59:04.554679 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:04.554689 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:04.554700 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.554710 | orchestrator |
2025-09-17 15:59:04.554721 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-17 15:59:04.554732 | orchestrator | Wednesday 17 September 2025 15:59:04 +0000 (0:00:00.144) 0:00:14.382 ***
2025-09-17 15:59:04.554742 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:04.554753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:04.554764 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.554774 | orchestrator |
2025-09-17 15:59:04.554785 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-17 15:59:04.554796 | orchestrator | Wednesday 17 September 2025 15:59:04 +0000 (0:00:00.133) 0:00:14.515 ***
2025-09-17 15:59:04.554806 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.554817 | orchestrator |
2025-09-17 15:59:04.554827 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-17 15:59:04.554845 | orchestrator | Wednesday 17 September 2025 15:59:04 +0000 (0:00:00.130) 0:00:14.646 ***
2025-09-17 15:59:04.554855 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:04.554866 | orchestrator |
2025-09-17 15:59:04.554884 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-17 15:59:10.290370 | orchestrator | Wednesday 17 September 2025 15:59:04 +0000 (0:00:00.108) 0:00:14.755 ***
2025-09-17 15:59:10.290436 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.290445 | orchestrator |
2025-09-17 15:59:10.290451 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-17 15:59:10.290485 | orchestrator | Wednesday 17 September 2025 15:59:04 +0000 (0:00:00.125) 0:00:14.880 ***
2025-09-17 15:59:10.290492 | orchestrator | ok: [testbed-node-3] => {
2025-09-17 15:59:10.290498 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-09-17 15:59:10.290504 | orchestrator | }
2025-09-17 15:59:10.290510 | orchestrator |
2025-09-17 15:59:10.290516 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-17 15:59:10.290522 | orchestrator | Wednesday 17 September 2025 15:59:04 +0000 (0:00:00.261) 0:00:15.142 ***
2025-09-17 15:59:10.290527 | orchestrator | ok: [testbed-node-3] => {
2025-09-17 15:59:10.290533 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-09-17 15:59:10.290539 | orchestrator | }
2025-09-17 15:59:10.290544 | orchestrator |
2025-09-17 15:59:10.290550 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-17 15:59:10.290556 | orchestrator | Wednesday 17 September 2025 15:59:05 +0000 (0:00:00.130) 0:00:15.272 ***
2025-09-17 15:59:10.290562 | orchestrator | ok: [testbed-node-3] => {
2025-09-17 15:59:10.290567 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-09-17 15:59:10.290573 | orchestrator | }
2025-09-17 15:59:10.290579 | orchestrator |
2025-09-17 15:59:10.290585 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-17 15:59:10.290591 | orchestrator | Wednesday 17 September 2025 15:59:05 +0000 (0:00:00.134) 0:00:15.406 ***
2025-09-17 15:59:10.290596 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:59:10.290602 | orchestrator |
2025-09-17 15:59:10.290607 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-17 15:59:10.290613 | orchestrator | Wednesday 17 September 2025 15:59:05 +0000 (0:00:00.644) 0:00:16.051 ***
2025-09-17 15:59:10.290619 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:59:10.290624 | orchestrator |
2025-09-17 15:59:10.290630 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-17 15:59:10.290636 | orchestrator | Wednesday 17 September 2025 15:59:06 +0000 (0:00:00.497) 0:00:16.549 ***
2025-09-17 15:59:10.290641 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:59:10.290647 | orchestrator |
2025-09-17 15:59:10.290653 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-17 15:59:10.290659 | orchestrator | Wednesday 17 September 2025 15:59:06 +0000 (0:00:00.511) 0:00:17.061 ***
2025-09-17 15:59:10.290664 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:59:10.290670 | orchestrator |
2025-09-17 15:59:10.290675 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-17 15:59:10.290681 | orchestrator | Wednesday 17 September 2025 15:59:06 +0000 (0:00:00.132) 0:00:17.194 ***
2025-09-17 15:59:10.290687 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.290692 | orchestrator |
2025-09-17 15:59:10.290698 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-17 15:59:10.290703 | orchestrator | Wednesday 17 September 2025 15:59:07 +0000 (0:00:00.114) 0:00:17.309 ***
2025-09-17 15:59:10.290709 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.290715 | orchestrator |
2025-09-17 15:59:10.290720 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-17 15:59:10.290726 | orchestrator | Wednesday 17 September 2025 15:59:07 +0000 (0:00:00.106) 0:00:17.415 ***
2025-09-17 15:59:10.290732 | orchestrator | ok: [testbed-node-3] => {
2025-09-17 15:59:10.290750 | orchestrator |  "vgs_report": {
2025-09-17 15:59:10.290756 | orchestrator |  "vg": []
2025-09-17 15:59:10.290762 | orchestrator |  }
2025-09-17 15:59:10.290768 | orchestrator | }
2025-09-17 15:59:10.290773 | orchestrator |
2025-09-17 15:59:10.290779 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-17 15:59:10.290785 | orchestrator | Wednesday 17 September 2025 15:59:07 +0000 (0:00:00.128) 0:00:17.543 ***
2025-09-17 15:59:10.290790 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.290796 | orchestrator |
2025-09-17 15:59:10.290802 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-17 15:59:10.290807 | orchestrator | Wednesday 17 September 2025 15:59:07 +0000 (0:00:00.131) 0:00:17.674 ***
2025-09-17 15:59:10.290813 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.290818 | orchestrator |
2025-09-17 15:59:10.290824 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-17 15:59:10.290829 | orchestrator | Wednesday 17 September 2025 15:59:07 +0000 (0:00:00.137) 0:00:17.812 ***
2025-09-17 15:59:10.290835 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.290841 | orchestrator |
2025-09-17 15:59:10.290846 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-17 15:59:10.290852 | orchestrator | Wednesday 17 September 2025 15:59:07 +0000 (0:00:00.315) 0:00:18.127 ***
2025-09-17 15:59:10.290857 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.290863 | orchestrator |
2025-09-17 15:59:10.290868 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-17 15:59:10.290874 | orchestrator | Wednesday 17 September 2025 15:59:08 +0000 (0:00:00.126) 0:00:18.254 ***
2025-09-17 15:59:10.290880 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.290885 | orchestrator |
2025-09-17 15:59:10.290901 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-17 15:59:10.290907 | orchestrator | Wednesday 17 September 2025 15:59:08 +0000 (0:00:00.126) 0:00:18.381 ***
2025-09-17 15:59:10.290912 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.290918 | orchestrator |
2025-09-17 15:59:10.290923 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-17 15:59:10.290929 | orchestrator | Wednesday 17 September 2025 15:59:08 +0000 (0:00:00.127) 0:00:18.509 ***
2025-09-17 15:59:10.290935 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.290940 | orchestrator |
2025-09-17 15:59:10.290946 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-17 15:59:10.290951 | orchestrator | Wednesday 17 September 2025 15:59:08 +0000 (0:00:00.160) 0:00:18.669 ***
2025-09-17 15:59:10.290957 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.290963 | orchestrator |
2025-09-17 15:59:10.290968 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-17 15:59:10.290985 | orchestrator | Wednesday 17 September 2025 15:59:08 +0000 (0:00:00.118) 0:00:18.787 ***
2025-09-17 15:59:10.290991 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.290996 | orchestrator |
2025-09-17 15:59:10.291002 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-17 15:59:10.291008 | orchestrator | Wednesday 17 September 2025 15:59:08 +0000 (0:00:00.129) 0:00:18.917 ***
2025-09-17 15:59:10.291013 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.291019 | orchestrator |
2025-09-17 15:59:10.291024 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-17 15:59:10.291030 | orchestrator | Wednesday 17 September 2025 15:59:08 +0000 (0:00:00.122) 0:00:19.039 ***
2025-09-17 15:59:10.291035 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.291041 | orchestrator |
2025-09-17 15:59:10.291046 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-17 15:59:10.291052 | orchestrator | Wednesday 17 September 2025 15:59:08 +0000 (0:00:00.114) 0:00:19.153 ***
2025-09-17 15:59:10.291057 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.291063 | orchestrator |
2025-09-17 15:59:10.291068 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-17 15:59:10.291079 | orchestrator | Wednesday 17 September 2025 15:59:09 +0000 (0:00:00.116) 0:00:19.269 ***
2025-09-17 15:59:10.291085 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.291090 | orchestrator |
2025-09-17 15:59:10.291096 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-17 15:59:10.291102 | orchestrator | Wednesday 17 September 2025 15:59:09 +0000 (0:00:00.127) 0:00:19.397 ***
2025-09-17 15:59:10.291107 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.291113 | orchestrator |
2025-09-17 15:59:10.291118 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-17 15:59:10.291124 | orchestrator | Wednesday 17 September 2025 15:59:09 +0000 (0:00:00.112) 0:00:19.510 ***
2025-09-17 15:59:10.291130 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:10.291136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:10.291142 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.291147 | orchestrator |
2025-09-17 15:59:10.291153 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-17 15:59:10.291159 | orchestrator | Wednesday 17 September 2025 15:59:09 +0000 (0:00:00.140) 0:00:19.650 ***
2025-09-17 15:59:10.291164 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:10.291170 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:10.291175 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.291181 | orchestrator |
2025-09-17 15:59:10.291187 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-17 15:59:10.291192 | orchestrator | Wednesday 17 September 2025 15:59:09 +0000 (0:00:00.281) 0:00:19.932 ***
2025-09-17 15:59:10.291201 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:10.291207 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:10.291212 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.291218 | orchestrator |
2025-09-17 15:59:10.291223 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-17 15:59:10.291229 | orchestrator | Wednesday 17 September 2025 15:59:09 +0000 (0:00:00.141) 0:00:20.073 ***
2025-09-17 15:59:10.291234 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:10.291240 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:10.291245 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.291251 | orchestrator |
2025-09-17 15:59:10.291257 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-17 15:59:10.291262 | orchestrator | Wednesday 17 September 2025 15:59:10 +0000 (0:00:00.139) 0:00:20.212 ***
2025-09-17 15:59:10.291267 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:10.291273 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:10.291279 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:10.291284 | orchestrator |
2025-09-17 15:59:10.291290 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-17 15:59:10.291299 | orchestrator | Wednesday 17 September 2025 15:59:10 +0000 (0:00:00.142) 0:00:20.354 ***
2025-09-17 15:59:10.291304 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:10.291314 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:15.269831 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:15.269916 | orchestrator |
2025-09-17 15:59:15.269932 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-17 15:59:15.269945 | orchestrator | Wednesday 17 September 2025 15:59:10 +0000 (0:00:00.138) 0:00:20.493 ***
2025-09-17 15:59:15.269957 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:15.269969 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:15.269981 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:15.269992 | orchestrator |
2025-09-17 15:59:15.270003 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-17 15:59:15.270045 | orchestrator | Wednesday 17 September 2025 15:59:10 +0000 (0:00:00.124) 0:00:20.618 ***
2025-09-17 15:59:15.270069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:15.270099 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:15.270121 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:15.270141 | orchestrator |
2025-09-17 15:59:15.270161 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-17 15:59:15.270181 | orchestrator | Wednesday 17 September 2025 15:59:10 +0000 (0:00:00.137) 0:00:20.756 ***
2025-09-17 15:59:15.270201 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:59:15.270221 | orchestrator |
2025-09-17 15:59:15.270239 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-17 15:59:15.270259 | orchestrator | Wednesday 17 September 2025 15:59:11 +0000 (0:00:00.509) 0:00:21.265 ***
2025-09-17 15:59:15.270278 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:59:15.270299 | orchestrator |
2025-09-17 15:59:15.270318 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-17 15:59:15.270338 | orchestrator | Wednesday 17 September 2025 15:59:11 +0000 (0:00:00.470) 0:00:21.736 ***
2025-09-17 15:59:15.270357 | orchestrator | ok: [testbed-node-3]
2025-09-17 15:59:15.270378 | orchestrator |
2025-09-17 15:59:15.270400 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-17 15:59:15.270421 | orchestrator | Wednesday 17 September 2025 15:59:11 +0000 (0:00:00.125) 0:00:21.861 ***
2025-09-17 15:59:15.270444 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'vg_name': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:15.270505 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'vg_name': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:15.270520 | orchestrator |
2025-09-17 15:59:15.270532 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-17 15:59:15.270544 | orchestrator | Wednesday 17 September 2025 15:59:11 +0000 (0:00:00.152) 0:00:22.013 ***
2025-09-17 15:59:15.270556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:15.270569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:15.270605 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:15.270625 | orchestrator |
2025-09-17 15:59:15.270644 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-17 15:59:15.270662 | orchestrator | Wednesday 17 September 2025 15:59:11 +0000 (0:00:00.144) 0:00:22.158 ***
2025-09-17 15:59:15.270680 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:15.270698 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:15.270717 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:15.270737 | orchestrator |
2025-09-17 15:59:15.270755 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-17 15:59:15.270774 | orchestrator | Wednesday 17 September 2025 15:59:12 +0000 (0:00:00.284) 0:00:22.442 ***
2025-09-17 15:59:15.270792 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
2025-09-17 15:59:15.270812 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
2025-09-17 15:59:15.270831 | orchestrator | skipping: [testbed-node-3]
2025-09-17 15:59:15.270849 | orchestrator |
2025-09-17 15:59:15.270865 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-17 15:59:15.270876 | orchestrator | Wednesday 17 September 2025 15:59:12 +0000 (0:00:00.151) 0:00:22.594 ***
2025-09-17 15:59:15.270886 | orchestrator | ok: [testbed-node-3] => {
2025-09-17 15:59:15.270897 | orchestrator |  "lvm_report": {
2025-09-17 15:59:15.270908 | orchestrator |  "lv": [
2025-09-17 15:59:15.270919 | orchestrator |  {
2025-09-17 15:59:15.270949 | orchestrator |  "lv_name": "osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79",
2025-09-17 15:59:15.270961 | orchestrator |  "vg_name": "ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79"
2025-09-17 15:59:15.270971 | orchestrator |  },
2025-09-17 15:59:15.270982 | orchestrator |  {
2025-09-17 15:59:15.270992 | orchestrator |  "lv_name": "osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c",
2025-09-17 15:59:15.271002 | orchestrator |  "vg_name": "ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c"
2025-09-17 15:59:15.271013 | orchestrator |  }
2025-09-17 15:59:15.271023 | orchestrator |  ],
2025-09-17 15:59:15.271034 | orchestrator |  "pv": [
2025-09-17 15:59:15.271044 | orchestrator |  {
2025-09-17 15:59:15.271054 | orchestrator |  "pv_name": "/dev/sdb",
2025-09-17 15:59:15.271069 | orchestrator |  "vg_name": "ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79"
2025-09-17 15:59:15.271088 | orchestrator |  },
2025-09-17 15:59:15.271106 | orchestrator |  {
2025-09-17 15:59:15.271123 | orchestrator |  "pv_name": "/dev/sdc",
2025-09-17 15:59:15.271139 | orchestrator |  "vg_name": "ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c"
2025-09-17 15:59:15.271156 | orchestrator |  }
2025-09-17 15:59:15.271173 | orchestrator |  ]
2025-09-17 15:59:15.271191 | orchestrator |  }
2025-09-17 15:59:15.271210 | orchestrator | }
2025-09-17 15:59:15.271229 | orchestrator |
2025-09-17 15:59:15.271249 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-17 15:59:15.271268 | orchestrator |
2025-09-17 15:59:15.271279 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-17 15:59:15.271290 | orchestrator | Wednesday 17 September 2025 15:59:12 +0000 (0:00:00.300) 0:00:22.894 ***
2025-09-17 15:59:15.271301 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-17 15:59:15.271312 | orchestrator |
2025-09-17 15:59:15.271338 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-17 15:59:15.271357 | orchestrator | Wednesday 17 September 2025 15:59:12 +0000 (0:00:00.216) 0:00:23.111 ***
2025-09-17 15:59:15.271376 | orchestrator | ok: [testbed-node-4]
2025-09-17 15:59:15.271395 | orchestrator |
2025-09-17 15:59:15.271413 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:59:15.271430 | orchestrator | Wednesday 17 September 2025 15:59:13 +0000 (0:00:00.235) 0:00:23.347 ***
2025-09-17 15:59:15.271490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-17 15:59:15.271512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-17 15:59:15.271530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-17 15:59:15.271549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-17 15:59:15.271569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-17 15:59:15.271587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-17 15:59:15.271607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-17 15:59:15.271625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-17 15:59:15.271649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-17 15:59:15.271660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-17 15:59:15.271671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-17 15:59:15.271682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-17 15:59:15.271693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-17 15:59:15.271703 | orchestrator |
2025-09-17 15:59:15.271714 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:59:15.271724 | orchestrator | Wednesday 17 September 2025 15:59:13 +0000 (0:00:00.398) 0:00:23.745 ***
2025-09-17 15:59:15.271735 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:59:15.271745 | orchestrator |
2025-09-17 15:59:15.271756 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:59:15.271767 | orchestrator | Wednesday 17 September 2025 15:59:13 +0000 (0:00:00.190) 0:00:23.936 ***
2025-09-17 15:59:15.271777 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:59:15.271788 | orchestrator |
2025-09-17 15:59:15.271798 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:59:15.271814 | orchestrator | Wednesday 17 September 2025 15:59:13 +0000 (0:00:00.179) 0:00:24.115 ***
2025-09-17 15:59:15.271832 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:59:15.271850 | orchestrator |
2025-09-17 15:59:15.271868 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:59:15.271885 | orchestrator | Wednesday 17 September 2025 15:59:14 +0000 (0:00:00.192) 0:00:24.307 ***
2025-09-17 15:59:15.271902 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:59:15.271920 | orchestrator |
2025-09-17 15:59:15.271938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:59:15.271956 | orchestrator | Wednesday 17 September 2025 15:59:14 +0000 (0:00:00.603) 0:00:24.911 ***
2025-09-17 15:59:15.271975 | orchestrator | skipping: [testbed-node-4]
2025-09-17 15:59:15.271993 | orchestrator |
2025-09-17 15:59:15.272010 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-17 15:59:15.272021 | orchestrator | Wednesday 17 September 2025 15:59:14 +0000 (0:00:00.192)
0:00:25.103 *** 2025-09-17 15:59:15.272031 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:15.272042 | orchestrator | 2025-09-17 15:59:15.272052 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:59:15.272073 | orchestrator | Wednesday 17 September 2025 15:59:15 +0000 (0:00:00.201) 0:00:25.305 *** 2025-09-17 15:59:15.272083 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:15.272094 | orchestrator | 2025-09-17 15:59:15.272117 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:59:25.087131 | orchestrator | Wednesday 17 September 2025 15:59:15 +0000 (0:00:00.161) 0:00:25.466 *** 2025-09-17 15:59:25.087225 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.087241 | orchestrator | 2025-09-17 15:59:25.087253 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:59:25.087264 | orchestrator | Wednesday 17 September 2025 15:59:15 +0000 (0:00:00.190) 0:00:25.656 *** 2025-09-17 15:59:25.087275 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d) 2025-09-17 15:59:25.087287 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d) 2025-09-17 15:59:25.087297 | orchestrator | 2025-09-17 15:59:25.087308 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:59:25.087319 | orchestrator | Wednesday 17 September 2025 15:59:15 +0000 (0:00:00.427) 0:00:26.084 *** 2025-09-17 15:59:25.087329 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_389a752f-d381-48a7-a4b5-e7f86559b7a2) 2025-09-17 15:59:25.087340 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_389a752f-d381-48a7-a4b5-e7f86559b7a2) 2025-09-17 15:59:25.087351 | orchestrator | 2025-09-17 15:59:25.087362 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:59:25.087372 | orchestrator | Wednesday 17 September 2025 15:59:16 +0000 (0:00:00.465) 0:00:26.549 *** 2025-09-17 15:59:25.087383 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_11507ddf-c78f-4c5a-8643-6bad2a8b39ae) 2025-09-17 15:59:25.087394 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_11507ddf-c78f-4c5a-8643-6bad2a8b39ae) 2025-09-17 15:59:25.087404 | orchestrator | 2025-09-17 15:59:25.087415 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:59:25.087425 | orchestrator | Wednesday 17 September 2025 15:59:16 +0000 (0:00:00.403) 0:00:26.953 *** 2025-09-17 15:59:25.087436 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ff1b16a2-3e0a-432a-b441-b3fe8b453f6d) 2025-09-17 15:59:25.087447 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ff1b16a2-3e0a-432a-b441-b3fe8b453f6d) 2025-09-17 15:59:25.087457 | orchestrator | 2025-09-17 15:59:25.087494 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-17 15:59:25.087505 | orchestrator | Wednesday 17 September 2025 15:59:17 +0000 (0:00:00.473) 0:00:27.427 *** 2025-09-17 15:59:25.087516 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-17 15:59:25.087526 | orchestrator | 2025-09-17 15:59:25.087537 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.087548 | orchestrator | Wednesday 17 September 2025 15:59:17 +0000 (0:00:00.292) 0:00:27.719 *** 2025-09-17 15:59:25.087558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-17 15:59:25.087584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-17 
15:59:25.087595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-17 15:59:25.087606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-17 15:59:25.087616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-17 15:59:25.087627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-17 15:59:25.087637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-17 15:59:25.087670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-17 15:59:25.087683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-17 15:59:25.087695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-17 15:59:25.087707 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-17 15:59:25.087719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-17 15:59:25.087732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-17 15:59:25.087745 | orchestrator | 2025-09-17 15:59:25.087757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.087769 | orchestrator | Wednesday 17 September 2025 15:59:18 +0000 (0:00:00.549) 0:00:28.268 *** 2025-09-17 15:59:25.087782 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.087794 | orchestrator | 2025-09-17 15:59:25.087806 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.087818 | orchestrator | Wednesday 
17 September 2025 15:59:18 +0000 (0:00:00.213) 0:00:28.482 *** 2025-09-17 15:59:25.087830 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.087842 | orchestrator | 2025-09-17 15:59:25.087855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.087867 | orchestrator | Wednesday 17 September 2025 15:59:18 +0000 (0:00:00.206) 0:00:28.689 *** 2025-09-17 15:59:25.087879 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.087891 | orchestrator | 2025-09-17 15:59:25.087903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.087915 | orchestrator | Wednesday 17 September 2025 15:59:18 +0000 (0:00:00.186) 0:00:28.876 *** 2025-09-17 15:59:25.087927 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.087939 | orchestrator | 2025-09-17 15:59:25.087968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.087981 | orchestrator | Wednesday 17 September 2025 15:59:18 +0000 (0:00:00.220) 0:00:29.096 *** 2025-09-17 15:59:25.087994 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.088005 | orchestrator | 2025-09-17 15:59:25.088016 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.088027 | orchestrator | Wednesday 17 September 2025 15:59:19 +0000 (0:00:00.191) 0:00:29.287 *** 2025-09-17 15:59:25.088037 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.088048 | orchestrator | 2025-09-17 15:59:25.088058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.088069 | orchestrator | Wednesday 17 September 2025 15:59:19 +0000 (0:00:00.237) 0:00:29.525 *** 2025-09-17 15:59:25.088080 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.088090 | orchestrator | 2025-09-17 15:59:25.088101 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.088112 | orchestrator | Wednesday 17 September 2025 15:59:19 +0000 (0:00:00.185) 0:00:29.710 *** 2025-09-17 15:59:25.088122 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.088132 | orchestrator | 2025-09-17 15:59:25.088143 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.088153 | orchestrator | Wednesday 17 September 2025 15:59:19 +0000 (0:00:00.196) 0:00:29.907 *** 2025-09-17 15:59:25.088164 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-17 15:59:25.088175 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-17 15:59:25.088185 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-17 15:59:25.088196 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-17 15:59:25.088206 | orchestrator | 2025-09-17 15:59:25.088217 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.088228 | orchestrator | Wednesday 17 September 2025 15:59:20 +0000 (0:00:00.762) 0:00:30.669 *** 2025-09-17 15:59:25.088247 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.088258 | orchestrator | 2025-09-17 15:59:25.088268 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.088279 | orchestrator | Wednesday 17 September 2025 15:59:20 +0000 (0:00:00.196) 0:00:30.866 *** 2025-09-17 15:59:25.088289 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.088300 | orchestrator | 2025-09-17 15:59:25.088311 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.088321 | orchestrator | Wednesday 17 September 2025 15:59:20 +0000 (0:00:00.169) 0:00:31.035 *** 2025-09-17 15:59:25.088332 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.088343 | 
orchestrator | 2025-09-17 15:59:25.088353 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-17 15:59:25.088364 | orchestrator | Wednesday 17 September 2025 15:59:21 +0000 (0:00:00.496) 0:00:31.531 *** 2025-09-17 15:59:25.088374 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.088385 | orchestrator | 2025-09-17 15:59:25.088395 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-17 15:59:25.088406 | orchestrator | Wednesday 17 September 2025 15:59:21 +0000 (0:00:00.196) 0:00:31.728 *** 2025-09-17 15:59:25.088417 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.088427 | orchestrator | 2025-09-17 15:59:25.088438 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-17 15:59:25.088449 | orchestrator | Wednesday 17 September 2025 15:59:21 +0000 (0:00:00.122) 0:00:31.850 *** 2025-09-17 15:59:25.088478 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '17f552da-d70b-5fe0-b76a-79be1323ddb4'}}) 2025-09-17 15:59:25.088489 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd72d4826-7802-5629-b85e-59298af53c3a'}}) 2025-09-17 15:59:25.088500 | orchestrator | 2025-09-17 15:59:25.088511 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-17 15:59:25.088521 | orchestrator | Wednesday 17 September 2025 15:59:21 +0000 (0:00:00.170) 0:00:32.020 *** 2025-09-17 15:59:25.088533 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'}) 2025-09-17 15:59:25.088545 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'}) 2025-09-17 15:59:25.088555 | 
orchestrator | 2025-09-17 15:59:25.088566 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-17 15:59:25.088576 | orchestrator | Wednesday 17 September 2025 15:59:23 +0000 (0:00:01.896) 0:00:33.917 *** 2025-09-17 15:59:25.088587 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})  2025-09-17 15:59:25.088598 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})  2025-09-17 15:59:25.088609 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:25.088620 | orchestrator | 2025-09-17 15:59:25.088630 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-17 15:59:25.088641 | orchestrator | Wednesday 17 September 2025 15:59:23 +0000 (0:00:00.147) 0:00:34.064 *** 2025-09-17 15:59:25.088651 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'}) 2025-09-17 15:59:25.088662 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'}) 2025-09-17 15:59:25.088673 | orchestrator | 2025-09-17 15:59:25.088691 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-17 15:59:29.990801 | orchestrator | Wednesday 17 September 2025 15:59:25 +0000 (0:00:01.220) 0:00:35.285 *** 2025-09-17 15:59:29.990926 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})  2025-09-17 15:59:29.990944 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})  2025-09-17 15:59:29.990966 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.990978 | orchestrator | 2025-09-17 15:59:29.990990 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-17 15:59:29.991001 | orchestrator | Wednesday 17 September 2025 15:59:25 +0000 (0:00:00.136) 0:00:35.421 *** 2025-09-17 15:59:29.991012 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.991023 | orchestrator | 2025-09-17 15:59:29.991034 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-17 15:59:29.991044 | orchestrator | Wednesday 17 September 2025 15:59:25 +0000 (0:00:00.127) 0:00:35.549 *** 2025-09-17 15:59:29.991055 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})  2025-09-17 15:59:29.991081 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})  2025-09-17 15:59:29.991092 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.991103 | orchestrator | 2025-09-17 15:59:29.991114 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-17 15:59:29.991124 | orchestrator | Wednesday 17 September 2025 15:59:25 +0000 (0:00:00.141) 0:00:35.691 *** 2025-09-17 15:59:29.991135 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.991145 | orchestrator | 2025-09-17 15:59:29.991156 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-17 15:59:29.991166 | orchestrator | Wednesday 17 September 2025 15:59:25 +0000 (0:00:00.117) 0:00:35.808 *** 2025-09-17 15:59:29.991177 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})  2025-09-17 15:59:29.991188 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})  2025-09-17 15:59:29.991198 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.991209 | orchestrator | 2025-09-17 15:59:29.991219 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-17 15:59:29.991230 | orchestrator | Wednesday 17 September 2025 15:59:25 +0000 (0:00:00.128) 0:00:35.936 *** 2025-09-17 15:59:29.991241 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.991251 | orchestrator | 2025-09-17 15:59:29.991266 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-17 15:59:29.991277 | orchestrator | Wednesday 17 September 2025 15:59:25 +0000 (0:00:00.272) 0:00:36.209 *** 2025-09-17 15:59:29.991287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})  2025-09-17 15:59:29.991298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})  2025-09-17 15:59:29.991308 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.991319 | orchestrator | 2025-09-17 15:59:29.991329 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-17 15:59:29.991340 | orchestrator | Wednesday 17 September 2025 15:59:26 +0000 (0:00:00.136) 0:00:36.345 *** 2025-09-17 15:59:29.991352 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:59:29.991364 | orchestrator | 2025-09-17 15:59:29.991376 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-09-17 15:59:29.991388 | orchestrator | Wednesday 17 September 2025 15:59:26 +0000 (0:00:00.132) 0:00:36.478 *** 2025-09-17 15:59:29.991408 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})  2025-09-17 15:59:29.991421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})  2025-09-17 15:59:29.991433 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.991445 | orchestrator | 2025-09-17 15:59:29.991457 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-17 15:59:29.991490 | orchestrator | Wednesday 17 September 2025 15:59:26 +0000 (0:00:00.140) 0:00:36.618 *** 2025-09-17 15:59:29.991502 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})  2025-09-17 15:59:29.991514 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})  2025-09-17 15:59:29.991526 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.991538 | orchestrator | 2025-09-17 15:59:29.991550 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-17 15:59:29.991562 | orchestrator | Wednesday 17 September 2025 15:59:26 +0000 (0:00:00.161) 0:00:36.779 *** 2025-09-17 15:59:29.991591 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})  2025-09-17 15:59:29.991604 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 
'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})  2025-09-17 15:59:29.991617 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.991628 | orchestrator | 2025-09-17 15:59:29.991640 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-17 15:59:29.991652 | orchestrator | Wednesday 17 September 2025 15:59:26 +0000 (0:00:00.140) 0:00:36.920 *** 2025-09-17 15:59:29.991664 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.991676 | orchestrator | 2025-09-17 15:59:29.991687 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-17 15:59:29.991699 | orchestrator | Wednesday 17 September 2025 15:59:26 +0000 (0:00:00.126) 0:00:37.046 *** 2025-09-17 15:59:29.991710 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.991720 | orchestrator | 2025-09-17 15:59:29.991731 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-17 15:59:29.991741 | orchestrator | Wednesday 17 September 2025 15:59:26 +0000 (0:00:00.109) 0:00:37.156 *** 2025-09-17 15:59:29.991752 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.991762 | orchestrator | 2025-09-17 15:59:29.991773 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-17 15:59:29.991783 | orchestrator | Wednesday 17 September 2025 15:59:27 +0000 (0:00:00.122) 0:00:37.278 *** 2025-09-17 15:59:29.991794 | orchestrator | ok: [testbed-node-4] => { 2025-09-17 15:59:29.991805 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-17 15:59:29.991815 | orchestrator | } 2025-09-17 15:59:29.991826 | orchestrator | 2025-09-17 15:59:29.991837 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-17 15:59:29.991847 | orchestrator | Wednesday 17 September 2025 15:59:27 +0000 (0:00:00.132) 0:00:37.410 *** 2025-09-17 15:59:29.991858 | 
orchestrator | ok: [testbed-node-4] => { 2025-09-17 15:59:29.991868 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-17 15:59:29.991879 | orchestrator | } 2025-09-17 15:59:29.991889 | orchestrator | 2025-09-17 15:59:29.991900 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-17 15:59:29.991910 | orchestrator | Wednesday 17 September 2025 15:59:27 +0000 (0:00:00.131) 0:00:37.542 *** 2025-09-17 15:59:29.991920 | orchestrator | ok: [testbed-node-4] => { 2025-09-17 15:59:29.991931 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-17 15:59:29.991942 | orchestrator | } 2025-09-17 15:59:29.991959 | orchestrator | 2025-09-17 15:59:29.991969 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-17 15:59:29.991980 | orchestrator | Wednesday 17 September 2025 15:59:27 +0000 (0:00:00.137) 0:00:37.680 *** 2025-09-17 15:59:29.991990 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:59:29.992001 | orchestrator | 2025-09-17 15:59:29.992011 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-17 15:59:29.992022 | orchestrator | Wednesday 17 September 2025 15:59:28 +0000 (0:00:00.624) 0:00:38.304 *** 2025-09-17 15:59:29.992032 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:59:29.992043 | orchestrator | 2025-09-17 15:59:29.992059 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-17 15:59:29.992070 | orchestrator | Wednesday 17 September 2025 15:59:28 +0000 (0:00:00.441) 0:00:38.746 *** 2025-09-17 15:59:29.992080 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:59:29.992091 | orchestrator | 2025-09-17 15:59:29.992101 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-17 15:59:29.992112 | orchestrator | Wednesday 17 September 2025 15:59:29 +0000 (0:00:00.503) 0:00:39.249 *** 2025-09-17 
15:59:29.992123 | orchestrator | ok: [testbed-node-4] 2025-09-17 15:59:29.992133 | orchestrator | 2025-09-17 15:59:29.992144 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-17 15:59:29.992154 | orchestrator | Wednesday 17 September 2025 15:59:29 +0000 (0:00:00.133) 0:00:39.383 *** 2025-09-17 15:59:29.992164 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.992175 | orchestrator | 2025-09-17 15:59:29.992185 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-17 15:59:29.992196 | orchestrator | Wednesday 17 September 2025 15:59:29 +0000 (0:00:00.100) 0:00:39.484 *** 2025-09-17 15:59:29.992206 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.992217 | orchestrator | 2025-09-17 15:59:29.992227 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-17 15:59:29.992238 | orchestrator | Wednesday 17 September 2025 15:59:29 +0000 (0:00:00.091) 0:00:39.575 *** 2025-09-17 15:59:29.992248 | orchestrator | ok: [testbed-node-4] => { 2025-09-17 15:59:29.992259 | orchestrator |  "vgs_report": { 2025-09-17 15:59:29.992269 | orchestrator |  "vg": [] 2025-09-17 15:59:29.992280 | orchestrator |  } 2025-09-17 15:59:29.992290 | orchestrator | } 2025-09-17 15:59:29.992301 | orchestrator | 2025-09-17 15:59:29.992311 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-17 15:59:29.992322 | orchestrator | Wednesday 17 September 2025 15:59:29 +0000 (0:00:00.128) 0:00:39.703 *** 2025-09-17 15:59:29.992332 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.992343 | orchestrator | 2025-09-17 15:59:29.992353 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-17 15:59:29.992364 | orchestrator | Wednesday 17 September 2025 15:59:29 +0000 (0:00:00.114) 0:00:39.818 *** 2025-09-17 
15:59:29.992374 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.992385 | orchestrator | 2025-09-17 15:59:29.992395 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-17 15:59:29.992406 | orchestrator | Wednesday 17 September 2025 15:59:29 +0000 (0:00:00.127) 0:00:39.946 *** 2025-09-17 15:59:29.992417 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.992427 | orchestrator | 2025-09-17 15:59:29.992438 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-17 15:59:29.992448 | orchestrator | Wednesday 17 September 2025 15:59:29 +0000 (0:00:00.124) 0:00:40.071 *** 2025-09-17 15:59:29.992473 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:29.992484 | orchestrator | 2025-09-17 15:59:29.992495 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-17 15:59:29.992511 | orchestrator | Wednesday 17 September 2025 15:59:29 +0000 (0:00:00.121) 0:00:40.192 *** 2025-09-17 15:59:34.334774 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:34.334847 | orchestrator | 2025-09-17 15:59:34.334859 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-17 15:59:34.334883 | orchestrator | Wednesday 17 September 2025 15:59:30 +0000 (0:00:00.112) 0:00:40.305 *** 2025-09-17 15:59:34.334892 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:34.334900 | orchestrator | 2025-09-17 15:59:34.334908 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-17 15:59:34.334916 | orchestrator | Wednesday 17 September 2025 15:59:30 +0000 (0:00:00.263) 0:00:40.569 *** 2025-09-17 15:59:34.334924 | orchestrator | skipping: [testbed-node-4] 2025-09-17 15:59:34.334931 | orchestrator | 2025-09-17 15:59:34.334939 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
****************
Wednesday 17 September 2025 15:59:30 +0000 (0:00:00.124)       0:00:40.694 ***
skipping: [testbed-node-4]

TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
Wednesday 17 September 2025 15:59:30 +0000 (0:00:00.123)       0:00:40.817 ***
skipping: [testbed-node-4]

TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
Wednesday 17 September 2025 15:59:30 +0000 (0:00:00.121)       0:00:40.939 ***
skipping: [testbed-node-4]

TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
Wednesday 17 September 2025 15:59:30 +0000 (0:00:00.136)       0:00:41.076 ***
skipping: [testbed-node-4]

TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
Wednesday 17 September 2025 15:59:30 +0000 (0:00:00.121)       0:00:41.198 ***
skipping: [testbed-node-4]

TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
Wednesday 17 September 2025 15:59:31 +0000 (0:00:00.130)       0:00:41.328 ***
skipping: [testbed-node-4]

TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
Wednesday 17 September 2025 15:59:31 +0000 (0:00:00.123)       0:00:41.452 ***
skipping: [testbed-node-4]

TASK [Create DB LVs for ceph_db_devices] ***************************************
Wednesday 17 September 2025 15:59:31 +0000 (0:00:00.132)       0:00:41.584 ***
skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})
skipping: [testbed-node-4]

TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
Wednesday 17 September 2025 15:59:31 +0000 (0:00:00.146)       0:00:41.731 ***
skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})
skipping: [testbed-node-4]

TASK [Create WAL LVs for ceph_wal_devices] *************************************
Wednesday 17 September 2025 15:59:31 +0000 (0:00:00.141)       0:00:41.873 ***
skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})
skipping: [testbed-node-4]

TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
Wednesday 17 September 2025 15:59:31 +0000 (0:00:00.149)       0:00:42.022 ***
skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})
skipping: [testbed-node-4]

TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
Wednesday 17 September 2025 15:59:32 +0000 (0:00:00.273)       0:00:42.296 ***
skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})
skipping: [testbed-node-4]

TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
Wednesday 17 September 2025 15:59:32 +0000 (0:00:00.132)       0:00:42.428 ***
skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})
skipping: [testbed-node-4]

TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
Wednesday 17 September 2025 15:59:32 +0000 (0:00:00.142)       0:00:42.571 ***
skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})
skipping: [testbed-node-4]

TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
Wednesday 17 September 2025 15:59:32 +0000 (0:00:00.150)       0:00:42.721 ***
skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})
skipping: [testbed-node-4]

TASK [Get list of Ceph LVs with associated VGs] ********************************
Wednesday 17 September 2025 15:59:32 +0000 (0:00:00.134)       0:00:42.855 ***
ok: [testbed-node-4]

TASK [Get list of Ceph PVs with associated VGs] ********************************
Wednesday 17 September 2025 15:59:33 +0000 (0:00:00.563)       0:00:43.419 ***
ok: [testbed-node-4]

TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
Wednesday 17 September 2025 15:59:33 +0000 (0:00:00.522)       0:00:43.941 ***
ok: [testbed-node-4]

TASK [Create list of VG/LV names] **********************************************
Wednesday 17 September 2025 15:59:33 +0000 (0:00:00.146)       0:00:44.088 ***
ok: [testbed-node-4] => (item={'lv_name': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'vg_name': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'vg_name': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})

TASK [Fail if block LV defined in lvm_volumes is missing] **********************
Wednesday 17 September 2025 15:59:34 +0000 (0:00:00.153)       0:00:44.241 ***
skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})
skipping: [testbed-node-4]

TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
Wednesday 17 September 2025 15:59:34 +0000 (0:00:00.155)       0:00:44.397 ***
skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})
skipping: [testbed-node-4]

TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
Wednesday 17 September 2025 15:59:34 +0000 (0:00:00.137)       0:00:44.534 ***
skipping: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
skipping: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})
skipping: [testbed-node-4]

TASK [Print LVM report data] ***************************************************
Wednesday 17 September 2025 15:59:34 +0000 (0:00:00.138)       0:00:44.673 ***
ok: [testbed-node-4] => {
    "lvm_report": {
        "lv": [
            {
                "lv_name": "osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4",
                "vg_name": "ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4"
            },
            {
                "lv_name": "osd-block-d72d4826-7802-5629-b85e-59298af53c3a",
                "vg_name": "ceph-d72d4826-7802-5629-b85e-59298af53c3a"
            }
        ],
        "pv": [
            {
                "pv_name": "/dev/sdb",
                "vg_name": "ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4"
            },
            {
                "pv_name": "/dev/sdc",
                "vg_name": "ceph-d72d4826-7802-5629-b85e-59298af53c3a"
            }
        ]
    }
}

PLAY [Ceph create LVM devices] *************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Wednesday 17 September 2025 15:59:34 +0000 (0:00:00.399)       0:00:45.072 ***
ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Wednesday 17 September 2025 15:59:35 +0000 (0:00:00.225)       0:00:45.298 ***
ok: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:35 +0000 (0:00:00.216)       0:00:45.514 ***
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:35 +0000 (0:00:00.381)       0:00:45.895 ***
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:35 +0000 (0:00:00.187)       0:00:46.083 ***
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:36 +0000 (0:00:00.195)       0:00:46.278 ***
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:36 +0000 (0:00:00.184)       0:00:46.463 ***
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:36 +0000 (0:00:00.185)       0:00:46.649 ***
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:36 +0000 (0:00:00.183)       0:00:46.833 ***
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:37 +0000 (0:00:00.477)       0:00:47.310 ***
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:37 +0000 (0:00:00.190)       0:00:47.500 ***
skipping: [testbed-node-5]

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:37 +0000 (0:00:00.200)       0:00:47.700 ***
ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767)
ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767)

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:37 +0000 (0:00:00.390)       0:00:48.091 ***
ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2998f2ff-923f-4644-b235-1d192431ff16)
ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2998f2ff-923f-4644-b235-1d192431ff16)

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:38 +0000 (0:00:00.415)       0:00:48.506 ***
ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2c018cbd-00e9-4926-8b68-5b46915e5cd3)
ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2c018cbd-00e9-4926-8b68-5b46915e5cd3)

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:38 +0000 (0:00:00.379)       0:00:48.886 ***
ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4c46eecc-90d4-4da2-9e84-51f99bffdbae)
ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4c46eecc-90d4-4da2-9e84-51f99bffdbae)

TASK [Add known links to the list of available block devices] ******************
Wednesday 17 September 2025 15:59:39 +0000 (0:00:00.390)       0:00:49.277 ***
ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:39 +0000 (0:00:00.300)       0:00:49.577 ***
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:39 +0000 (0:00:00.396)       0:00:49.973 ***
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:39 +0000 (0:00:00.178)       0:00:50.151 ***
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:40 +0000 (0:00:00.189)       0:00:50.341 ***
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:40 +0000 (0:00:00.478)       0:00:50.820 ***
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:40 +0000 (0:00:00.176)       0:00:50.997 ***
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:41 +0000 (0:00:00.213)       0:00:51.210 ***
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:41 +0000 (0:00:00.184)       0:00:51.395 ***
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:41 +0000 (0:00:00.181)       0:00:51.576 ***
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:41 +0000 (0:00:00.183)       0:00:51.759 ***
ok: [testbed-node-5] => (item=sda1)
ok: [testbed-node-5] => (item=sda14)
ok: [testbed-node-5] => (item=sda15)
ok: [testbed-node-5] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:42 +0000 (0:00:00.580)       0:00:52.340 ***
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:42 +0000 (0:00:00.184)       0:00:52.525 ***
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:42 +0000 (0:00:00.218)       0:00:52.743 ***
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Wednesday 17 September 2025 15:59:42 +0000 (0:00:00.175)       0:00:52.919 ***
skipping: [testbed-node-5]

TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
Wednesday 17 September 2025 15:59:42 +0000 (0:00:00.179)       0:00:53.098 ***
skipping: [testbed-node-5]

TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
Wednesday 17 September 2025 15:59:43 +0000 (0:00:00.266)       0:00:53.365 ***
ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'}})
ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ce5409dd-a4db-5391-81df-07600c6136f3'}})

TASK [Create block VGs] ********************************************************
Wednesday 17 September 2025 15:59:43 +0000 (0:00:00.188)       0:00:53.553 ***
changed: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})
changed: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})

TASK [Print 'Create block VGs'] ************************************************
Wednesday 17 September 2025 15:59:45 +0000 (0:00:01.832)       0:00:55.386 ***
skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})
skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})
skipping: [testbed-node-5]

TASK [Create block LVs] ********************************************************
Wednesday 17 September 2025 15:59:45 +0000 (0:00:00.152)       0:00:55.538 ***
changed: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})
changed: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})

TASK [Print 'Create block LVs'] ************************************************
Wednesday 17 September 2025 15:59:46 +0000 (0:00:01.359)       0:00:56.897 ***
skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})
skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})
skipping: [testbed-node-5]

TASK [Create DB VGs] ***********************************************************
Wednesday 17 September 2025 15:59:46 +0000 (0:00:00.142)       0:00:57.039 ***
skipping: [testbed-node-5]

TASK [Print 'Create DB VGs'] ***************************************************
Wednesday 17 September 2025 15:59:46 +0000 (0:00:00.163)       0:00:57.203 ***
skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})
skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})
skipping: [testbed-node-5]

TASK [Create WAL VGs] **********************************************************
Wednesday 17 September 2025 15:59:47 +0000 (0:00:00.145)       0:00:57.349 ***
skipping: [testbed-node-5]

TASK [Print 'Create WAL VGs'] **************************************************
Wednesday 17 September 2025 15:59:47 +0000 (0:00:00.152)       0:00:57.502 ***
skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})
skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})
skipping: [testbed-node-5]

TASK [Create DB+WAL VGs] *******************************************************
Wednesday 17 September 2025 15:59:47 +0000 (0:00:00.144)       0:00:57.646 ***
skipping: [testbed-node-5]

TASK [Print 'Create DB+WAL VGs'] ***********************************************
Wednesday 17 September 2025 15:59:47 +0000 (0:00:00.111)       0:00:57.758 ***
skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})
skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})
skipping: [testbed-node-5]

TASK [Prepare variables for OSD count check] ***********************************
Wednesday 17 September 2025 15:59:47 +0000 (0:00:00.154)       0:00:57.913 ***
ok: [testbed-node-5]

TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
Wednesday 17 September 2025 15:59:47 +0000 (0:00:00.127)       0:00:58.040 ***
skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})
skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})
skipping: [testbed-node-5]

TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
Wednesday 17 September 2025 15:59:48 +0000 (0:00:00.305)       0:00:58.346 ***
skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})
skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})
skipping: [testbed-node-5]

TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
Wednesday 17 September 2025 15:59:48 +0000 (0:00:00.152)       0:00:58.498 ***
skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})
skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})
skipping: [testbed-node-5]

TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
Wednesday 17 September 2025 15:59:48 +0000 (0:00:00.155)       0:00:58.654 ***
skipping: [testbed-node-5]

TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
Wednesday 17 September 2025 15:59:48 +0000 (0:00:00.123)       0:00:58.777 ***
skipping: [testbed-node-5]

TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
Wednesday 17 September 2025 15:59:48 +0000 (0:00:00.133)       0:00:58.911 ***
skipping: [testbed-node-5]

TASK [Print number of OSDs wanted per DB VG] ***********************************
Wednesday 17 September 2025 15:59:48 +0000 (0:00:00.124)       0:00:59.035 ***
ok: [testbed-node-5] => {
    "_num_osds_wanted_per_db_vg": {}
}

TASK [Print number of OSDs wanted per WAL VG] **********************************
Wednesday 17 September 2025 15:59:48 +0000 (0:00:00.122)       0:00:59.157 ***
ok: [testbed-node-5] => {
    "_num_osds_wanted_per_wal_vg": {}
}

TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
Wednesday 17 September 2025 15:59:49 +0000 (0:00:00.120)       0:00:59.278 ***
ok: [testbed-node-5] => {
    "_num_osds_wanted_per_db_wal_vg": {}
}

TASK [Gather DB VGs with total and available size in bytes] ********************
orchestrator | Wednesday 17 September 2025 15:59:49 +0000 (0:00:00.116) 0:00:59.394 *** 2025-09-17 15:59:54.156981 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:59:54.156993 | orchestrator | 2025-09-17 15:59:54.157004 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-17 15:59:54.157016 | orchestrator | Wednesday 17 September 2025 15:59:49 +0000 (0:00:00.507) 0:00:59.902 *** 2025-09-17 15:59:54.157028 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:59:54.157039 | orchestrator | 2025-09-17 15:59:54.157051 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-17 15:59:54.157063 | orchestrator | Wednesday 17 September 2025 15:59:50 +0000 (0:00:00.504) 0:01:00.406 *** 2025-09-17 15:59:54.157074 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:59:54.157086 | orchestrator | 2025-09-17 15:59:54.157098 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-17 15:59:54.157110 | orchestrator | Wednesday 17 September 2025 15:59:50 +0000 (0:00:00.522) 0:01:00.928 *** 2025-09-17 15:59:54.157122 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:59:54.157133 | orchestrator | 2025-09-17 15:59:54.157145 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-17 15:59:54.157157 | orchestrator | Wednesday 17 September 2025 15:59:51 +0000 (0:00:00.380) 0:01:01.308 *** 2025-09-17 15:59:54.157169 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157179 | orchestrator | 2025-09-17 15:59:54.157190 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-17 15:59:54.157200 | orchestrator | Wednesday 17 September 2025 15:59:51 +0000 (0:00:00.121) 0:01:01.430 *** 2025-09-17 15:59:54.157211 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157232 | orchestrator | 2025-09-17 15:59:54.157243 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-17 15:59:54.157253 | orchestrator | Wednesday 17 September 2025 15:59:51 +0000 (0:00:00.111) 0:01:01.541 *** 2025-09-17 15:59:54.157264 | orchestrator | ok: [testbed-node-5] => { 2025-09-17 15:59:54.157274 | orchestrator |  "vgs_report": { 2025-09-17 15:59:54.157285 | orchestrator |  "vg": [] 2025-09-17 15:59:54.157312 | orchestrator |  } 2025-09-17 15:59:54.157324 | orchestrator | } 2025-09-17 15:59:54.157334 | orchestrator | 2025-09-17 15:59:54.157345 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-17 15:59:54.157355 | orchestrator | Wednesday 17 September 2025 15:59:51 +0000 (0:00:00.163) 0:01:01.705 *** 2025-09-17 15:59:54.157366 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157376 | orchestrator | 2025-09-17 15:59:54.157387 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-17 15:59:54.157397 | orchestrator | Wednesday 17 September 2025 15:59:51 +0000 (0:00:00.137) 0:01:01.843 *** 2025-09-17 15:59:54.157408 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157418 | orchestrator | 2025-09-17 15:59:54.157428 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-17 15:59:54.157439 | orchestrator | Wednesday 17 September 2025 15:59:51 +0000 (0:00:00.137) 0:01:01.980 *** 2025-09-17 15:59:54.157449 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157531 | orchestrator | 2025-09-17 15:59:54.157545 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-17 15:59:54.157555 | orchestrator | Wednesday 17 September 2025 15:59:51 +0000 (0:00:00.133) 0:01:02.114 *** 2025-09-17 15:59:54.157566 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157577 | orchestrator | 2025-09-17 15:59:54.157587 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-17 15:59:54.157598 | orchestrator | Wednesday 17 September 2025 15:59:52 +0000 (0:00:00.144) 0:01:02.259 *** 2025-09-17 15:59:54.157608 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157618 | orchestrator | 2025-09-17 15:59:54.157629 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-17 15:59:54.157639 | orchestrator | Wednesday 17 September 2025 15:59:52 +0000 (0:00:00.136) 0:01:02.395 *** 2025-09-17 15:59:54.157650 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157660 | orchestrator | 2025-09-17 15:59:54.157671 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-17 15:59:54.157681 | orchestrator | Wednesday 17 September 2025 15:59:52 +0000 (0:00:00.137) 0:01:02.532 *** 2025-09-17 15:59:54.157691 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157702 | orchestrator | 2025-09-17 15:59:54.157712 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-17 15:59:54.157723 | orchestrator | Wednesday 17 September 2025 15:59:52 +0000 (0:00:00.130) 0:01:02.663 *** 2025-09-17 15:59:54.157733 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157744 | orchestrator | 2025-09-17 15:59:54.157754 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-17 15:59:54.157765 | orchestrator | Wednesday 17 September 2025 15:59:52 +0000 (0:00:00.144) 0:01:02.807 *** 2025-09-17 15:59:54.157775 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157785 | orchestrator | 2025-09-17 15:59:54.157796 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-17 15:59:54.157806 | orchestrator | Wednesday 17 September 2025 15:59:52 +0000 (0:00:00.375) 0:01:03.182 *** 
2025-09-17 15:59:54.157823 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157834 | orchestrator | 2025-09-17 15:59:54.157845 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-17 15:59:54.157855 | orchestrator | Wednesday 17 September 2025 15:59:53 +0000 (0:00:00.140) 0:01:03.322 *** 2025-09-17 15:59:54.157865 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157876 | orchestrator | 2025-09-17 15:59:54.157886 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-17 15:59:54.157905 | orchestrator | Wednesday 17 September 2025 15:59:53 +0000 (0:00:00.148) 0:01:03.471 *** 2025-09-17 15:59:54.157915 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157926 | orchestrator | 2025-09-17 15:59:54.157936 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-17 15:59:54.157947 | orchestrator | Wednesday 17 September 2025 15:59:53 +0000 (0:00:00.136) 0:01:03.608 *** 2025-09-17 15:59:54.157957 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.157968 | orchestrator | 2025-09-17 15:59:54.157979 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-17 15:59:54.157989 | orchestrator | Wednesday 17 September 2025 15:59:53 +0000 (0:00:00.144) 0:01:03.753 *** 2025-09-17 15:59:54.158000 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.158010 | orchestrator | 2025-09-17 15:59:54.158082 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-17 15:59:54.158094 | orchestrator | Wednesday 17 September 2025 15:59:53 +0000 (0:00:00.137) 0:01:03.890 *** 2025-09-17 15:59:54.158105 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})  2025-09-17 
15:59:54.158116 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})  2025-09-17 15:59:54.158126 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.158137 | orchestrator | 2025-09-17 15:59:54.158148 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-17 15:59:54.158158 | orchestrator | Wednesday 17 September 2025 15:59:53 +0000 (0:00:00.165) 0:01:04.056 *** 2025-09-17 15:59:54.158169 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})  2025-09-17 15:59:54.158179 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})  2025-09-17 15:59:54.158190 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:54.158200 | orchestrator | 2025-09-17 15:59:54.158211 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-17 15:59:54.158221 | orchestrator | Wednesday 17 September 2025 15:59:53 +0000 (0:00:00.152) 0:01:04.208 *** 2025-09-17 15:59:54.158241 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})  2025-09-17 15:59:56.957447 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})  2025-09-17 15:59:56.957582 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:56.957598 | orchestrator | 2025-09-17 15:59:56.957610 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-17 15:59:56.957622 | orchestrator | Wednesday 17 September 2025 
15:59:54 +0000 (0:00:00.149) 0:01:04.357 *** 2025-09-17 15:59:56.957634 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})  2025-09-17 15:59:56.957644 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})  2025-09-17 15:59:56.957655 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:56.957666 | orchestrator | 2025-09-17 15:59:56.957677 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-17 15:59:56.957687 | orchestrator | Wednesday 17 September 2025 15:59:54 +0000 (0:00:00.151) 0:01:04.509 *** 2025-09-17 15:59:56.957698 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})  2025-09-17 15:59:56.957730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})  2025-09-17 15:59:56.957742 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:56.957752 | orchestrator | 2025-09-17 15:59:56.957763 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-17 15:59:56.957773 | orchestrator | Wednesday 17 September 2025 15:59:54 +0000 (0:00:00.163) 0:01:04.673 *** 2025-09-17 15:59:56.957784 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})  2025-09-17 15:59:56.957794 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})  2025-09-17 15:59:56.957805 | orchestrator | skipping: 
[testbed-node-5] 2025-09-17 15:59:56.957815 | orchestrator | 2025-09-17 15:59:56.957826 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-17 15:59:56.957837 | orchestrator | Wednesday 17 September 2025 15:59:54 +0000 (0:00:00.164) 0:01:04.838 *** 2025-09-17 15:59:56.957848 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})  2025-09-17 15:59:56.957858 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})  2025-09-17 15:59:56.957869 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:56.957879 | orchestrator | 2025-09-17 15:59:56.957890 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-17 15:59:56.957900 | orchestrator | Wednesday 17 September 2025 15:59:54 +0000 (0:00:00.295) 0:01:05.133 *** 2025-09-17 15:59:56.957911 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})  2025-09-17 15:59:56.957922 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})  2025-09-17 15:59:56.957932 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:56.957943 | orchestrator | 2025-09-17 15:59:56.957953 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-17 15:59:56.957964 | orchestrator | Wednesday 17 September 2025 15:59:55 +0000 (0:00:00.148) 0:01:05.282 *** 2025-09-17 15:59:56.957975 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:59:56.957987 | orchestrator | 2025-09-17 15:59:56.957999 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-09-17 15:59:56.958011 | orchestrator | Wednesday 17 September 2025 15:59:55 +0000 (0:00:00.487) 0:01:05.770 *** 2025-09-17 15:59:56.958083 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:59:56.958096 | orchestrator | 2025-09-17 15:59:56.958108 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-17 15:59:56.958120 | orchestrator | Wednesday 17 September 2025 15:59:56 +0000 (0:00:00.485) 0:01:06.255 *** 2025-09-17 15:59:56.958133 | orchestrator | ok: [testbed-node-5] 2025-09-17 15:59:56.958144 | orchestrator | 2025-09-17 15:59:56.958156 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-17 15:59:56.958169 | orchestrator | Wednesday 17 September 2025 15:59:56 +0000 (0:00:00.139) 0:01:06.395 *** 2025-09-17 15:59:56.958180 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'vg_name': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'}) 2025-09-17 15:59:56.958193 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'vg_name': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'}) 2025-09-17 15:59:56.958205 | orchestrator | 2025-09-17 15:59:56.958217 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-17 15:59:56.958237 | orchestrator | Wednesday 17 September 2025 15:59:56 +0000 (0:00:00.159) 0:01:06.555 *** 2025-09-17 15:59:56.958266 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})  2025-09-17 15:59:56.958279 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})  2025-09-17 15:59:56.958291 | orchestrator | skipping: 
[testbed-node-5] 2025-09-17 15:59:56.958303 | orchestrator | 2025-09-17 15:59:56.958315 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-17 15:59:56.958327 | orchestrator | Wednesday 17 September 2025 15:59:56 +0000 (0:00:00.143) 0:01:06.699 *** 2025-09-17 15:59:56.958339 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})  2025-09-17 15:59:56.958351 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})  2025-09-17 15:59:56.958362 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:56.958373 | orchestrator | 2025-09-17 15:59:56.958384 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-17 15:59:56.958395 | orchestrator | Wednesday 17 September 2025 15:59:56 +0000 (0:00:00.148) 0:01:06.847 *** 2025-09-17 15:59:56.958405 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})  2025-09-17 15:59:56.958431 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})  2025-09-17 15:59:56.958443 | orchestrator | skipping: [testbed-node-5] 2025-09-17 15:59:56.958453 | orchestrator | 2025-09-17 15:59:56.958483 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-17 15:59:56.958494 | orchestrator | Wednesday 17 September 2025 15:59:56 +0000 (0:00:00.138) 0:01:06.986 *** 2025-09-17 15:59:56.958505 | orchestrator | ok: [testbed-node-5] => { 2025-09-17 15:59:56.958516 | orchestrator |  "lvm_report": { 2025-09-17 15:59:56.958526 | orchestrator |  "lv": [ 2025-09-17 
15:59:56.958537 | orchestrator |  { 2025-09-17 15:59:56.958548 | orchestrator |  "lv_name": "osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133", 2025-09-17 15:59:56.958559 | orchestrator |  "vg_name": "ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133" 2025-09-17 15:59:56.958569 | orchestrator |  }, 2025-09-17 15:59:56.958585 | orchestrator |  { 2025-09-17 15:59:56.958596 | orchestrator |  "lv_name": "osd-block-ce5409dd-a4db-5391-81df-07600c6136f3", 2025-09-17 15:59:56.958607 | orchestrator |  "vg_name": "ceph-ce5409dd-a4db-5391-81df-07600c6136f3" 2025-09-17 15:59:56.958617 | orchestrator |  } 2025-09-17 15:59:56.958628 | orchestrator |  ], 2025-09-17 15:59:56.958638 | orchestrator |  "pv": [ 2025-09-17 15:59:56.958649 | orchestrator |  { 2025-09-17 15:59:56.958659 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-17 15:59:56.958670 | orchestrator |  "vg_name": "ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133" 2025-09-17 15:59:56.958681 | orchestrator |  }, 2025-09-17 15:59:56.958692 | orchestrator |  { 2025-09-17 15:59:56.958702 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-17 15:59:56.958713 | orchestrator |  "vg_name": "ceph-ce5409dd-a4db-5391-81df-07600c6136f3" 2025-09-17 15:59:56.958724 | orchestrator |  } 2025-09-17 15:59:56.958734 | orchestrator |  ] 2025-09-17 15:59:56.958745 | orchestrator |  } 2025-09-17 15:59:56.958756 | orchestrator | } 2025-09-17 15:59:56.958766 | orchestrator | 2025-09-17 15:59:56.958777 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 15:59:56.958788 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-17 15:59:56.958806 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-17 15:59:56.958817 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-17 15:59:56.958828 | orchestrator | 2025-09-17 15:59:56.958839 | 
orchestrator | 2025-09-17 15:59:56.958849 | orchestrator | 2025-09-17 15:59:56.958860 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 15:59:56.958871 | orchestrator | Wednesday 17 September 2025 15:59:56 +0000 (0:00:00.138) 0:01:07.125 *** 2025-09-17 15:59:56.958881 | orchestrator | =============================================================================== 2025-09-17 15:59:56.958892 | orchestrator | Create block VGs -------------------------------------------------------- 5.70s 2025-09-17 15:59:56.958903 | orchestrator | Create block LVs -------------------------------------------------------- 3.97s 2025-09-17 15:59:56.958913 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.78s 2025-09-17 15:59:56.958924 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.56s 2025-09-17 15:59:56.958934 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.54s 2025-09-17 15:59:56.958945 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.48s 2025-09-17 15:59:56.958956 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.44s 2025-09-17 15:59:56.958966 | orchestrator | Add known partitions to the list of available block devices ------------- 1.32s 2025-09-17 15:59:56.958983 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s 2025-09-17 15:59:57.217714 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2025-09-17 15:59:57.217783 | orchestrator | Print LVM report data --------------------------------------------------- 0.84s 2025-09-17 15:59:57.217795 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2025-09-17 15:59:57.217805 | orchestrator | Get extra vars for Ceph configuration 
----------------------------------- 0.68s 2025-09-17 15:59:57.217814 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s 2025-09-17 15:59:57.217824 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.65s 2025-09-17 15:59:57.217833 | orchestrator | Print size needed for WAL LVs on ceph_db_wal_devices -------------------- 0.63s 2025-09-17 15:59:57.217843 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-09-17 15:59:57.217852 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.60s 2025-09-17 15:59:57.217861 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2025-09-17 15:59:57.217871 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.58s 2025-09-17 16:00:09.101495 | orchestrator | 2025-09-17 16:00:09 | INFO  | Task b38f4d4c-7e38-4bc4-923e-566686024f14 (facts) was prepared for execution. 2025-09-17 16:00:09.101572 | orchestrator | 2025-09-17 16:00:09 | INFO  | It takes a moment until task b38f4d4c-7e38-4bc4-923e-566686024f14 (facts) has been started and output is visible here. 
2025-09-17 16:00:20.305018 | orchestrator | 2025-09-17 16:00:20.305120 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-17 16:00:20.305137 | orchestrator | 2025-09-17 16:00:20.305149 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-17 16:00:20.305160 | orchestrator | Wednesday 17 September 2025 16:00:12 +0000 (0:00:00.247) 0:00:00.248 *** 2025-09-17 16:00:20.305171 | orchestrator | ok: [testbed-manager] 2025-09-17 16:00:20.305182 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:00:20.305193 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:00:20.305229 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:00:20.305241 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:00:20.305251 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:00:20.305261 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:00:20.305271 | orchestrator | 2025-09-17 16:00:20.305282 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-17 16:00:20.305293 | orchestrator | Wednesday 17 September 2025 16:00:13 +0000 (0:00:00.977) 0:00:01.225 *** 2025-09-17 16:00:20.305303 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:00:20.305326 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:00:20.305337 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:00:20.305348 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:00:20.305359 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:00:20.305369 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:00:20.305380 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:00:20.305390 | orchestrator | 2025-09-17 16:00:20.305401 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-17 16:00:20.305411 | orchestrator | 2025-09-17 16:00:20.305422 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-09-17 16:00:20.305432 | orchestrator | Wednesday 17 September 2025 16:00:14 +0000 (0:00:00.919) 0:00:02.145 *** 2025-09-17 16:00:20.305443 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:00:20.305453 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:00:20.305487 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:00:20.305498 | orchestrator | ok: [testbed-manager] 2025-09-17 16:00:20.305509 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:00:20.305519 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:00:20.305529 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:00:20.305540 | orchestrator | 2025-09-17 16:00:20.305551 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-17 16:00:20.305561 | orchestrator | 2025-09-17 16:00:20.305573 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-17 16:00:20.305586 | orchestrator | Wednesday 17 September 2025 16:00:19 +0000 (0:00:04.880) 0:00:07.025 *** 2025-09-17 16:00:20.305598 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:00:20.305609 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:00:20.305621 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:00:20.305633 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:00:20.305645 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:00:20.305657 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:00:20.305668 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:00:20.305680 | orchestrator | 2025-09-17 16:00:20.305692 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:00:20.305705 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:00:20.305717 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-09-17 16:00:20.305730 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:00:20.305742 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:00:20.305754 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:00:20.305766 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:00:20.305779 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:00:20.305791 | orchestrator | 2025-09-17 16:00:20.305803 | orchestrator | 2025-09-17 16:00:20.305825 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:00:20.305837 | orchestrator | Wednesday 17 September 2025 16:00:20 +0000 (0:00:00.460) 0:00:07.486 *** 2025-09-17 16:00:20.305849 | orchestrator | =============================================================================== 2025-09-17 16:00:20.305862 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.88s 2025-09-17 16:00:20.305874 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.98s 2025-09-17 16:00:20.305886 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 0.92s 2025-09-17 16:00:20.305896 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2025-09-17 16:00:32.145401 | orchestrator | 2025-09-17 16:00:32 | INFO  | Task 9fa36646-037a-403f-a337-753f8d8871d5 (frr) was prepared for execution. 2025-09-17 16:00:32.145521 | orchestrator | 2025-09-17 16:00:32 | INFO  | It takes a moment until task 9fa36646-037a-403f-a337-753f8d8871d5 (frr) has been started and output is visible here. 
2025-09-17 16:00:57.471984 | orchestrator |
2025-09-17 16:00:57.472098 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-09-17 16:00:57.472116 | orchestrator |
2025-09-17 16:00:57.472129 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-09-17 16:00:57.472142 | orchestrator | Wednesday 17 September 2025 16:00:36 +0000 (0:00:00.229) 0:00:00.229 ***
2025-09-17 16:00:57.472153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-09-17 16:00:57.472166 | orchestrator |
2025-09-17 16:00:57.472178 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-09-17 16:00:57.472188 | orchestrator | Wednesday 17 September 2025 16:00:36 +0000 (0:00:00.220) 0:00:00.450 ***
2025-09-17 16:00:57.472199 | orchestrator | changed: [testbed-manager]
2025-09-17 16:00:57.472211 | orchestrator |
2025-09-17 16:00:57.472222 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-09-17 16:00:57.472233 | orchestrator | Wednesday 17 September 2025 16:00:37 +0000 (0:00:01.179) 0:00:01.629 ***
2025-09-17 16:00:57.472243 | orchestrator | changed: [testbed-manager]
2025-09-17 16:00:57.472254 | orchestrator |
2025-09-17 16:00:57.472265 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-09-17 16:00:57.472292 | orchestrator | Wednesday 17 September 2025 16:00:47 +0000 (0:00:09.512) 0:00:11.142 ***
2025-09-17 16:00:57.472303 | orchestrator | ok: [testbed-manager]
2025-09-17 16:00:57.472315 | orchestrator |
2025-09-17 16:00:57.472326 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-09-17 16:00:57.472337 | orchestrator | Wednesday 17 September 2025 16:00:48 +0000 (0:00:01.234) 0:00:12.377 ***
2025-09-17 16:00:57.472347 | orchestrator | changed: [testbed-manager]
2025-09-17 16:00:57.472358 | orchestrator |
2025-09-17 16:00:57.472369 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-09-17 16:00:57.472380 | orchestrator | Wednesday 17 September 2025 16:00:49 +0000 (0:00:00.842) 0:00:13.219 ***
2025-09-17 16:00:57.472390 | orchestrator | ok: [testbed-manager]
2025-09-17 16:00:57.472401 | orchestrator |
2025-09-17 16:00:57.472412 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-09-17 16:00:57.472424 | orchestrator | Wednesday 17 September 2025 16:00:50 +0000 (0:00:01.203) 0:00:14.423 ***
2025-09-17 16:00:57.472435 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 16:00:57.472445 | orchestrator |
2025-09-17 16:00:57.472498 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-09-17 16:00:57.472510 | orchestrator | Wednesday 17 September 2025 16:00:51 +0000 (0:00:00.784) 0:00:15.207 ***
2025-09-17 16:00:57.472523 | orchestrator | skipping: [testbed-manager]
2025-09-17 16:00:57.472535 | orchestrator |
2025-09-17 16:00:57.472547 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-09-17 16:00:57.472586 | orchestrator | Wednesday 17 September 2025 16:00:51 +0000 (0:00:00.159) 0:00:15.367 ***
2025-09-17 16:00:57.472599 | orchestrator | changed: [testbed-manager]
2025-09-17 16:00:57.472611 | orchestrator |
2025-09-17 16:00:57.472623 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-09-17 16:00:57.472636 | orchestrator | Wednesday 17 September 2025 16:00:52 +0000 (0:00:00.951) 0:00:16.318 ***
2025-09-17 16:00:57.472648 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-09-17 16:00:57.472661 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-09-17 16:00:57.472675 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-09-17 16:00:57.472688 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-09-17 16:00:57.472699 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-09-17 16:00:57.472712 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-09-17 16:00:57.472724 | orchestrator |
2025-09-17 16:00:57.472736 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-09-17 16:00:57.472748 | orchestrator | Wednesday 17 September 2025 16:00:54 +0000 (0:00:02.135) 0:00:18.454 ***
2025-09-17 16:00:57.472761 | orchestrator | ok: [testbed-manager]
2025-09-17 16:00:57.472773 | orchestrator |
2025-09-17 16:00:57.472786 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-09-17 16:00:57.472799 | orchestrator | Wednesday 17 September 2025 16:00:55 +0000 (0:00:01.312) 0:00:19.767 ***
2025-09-17 16:00:57.472811 | orchestrator | changed: [testbed-manager]
2025-09-17 16:00:57.472823 | orchestrator |
2025-09-17 16:00:57.472835 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:00:57.472848 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-17 16:00:57.472860 | orchestrator |
2025-09-17 16:00:57.472873 | orchestrator |
2025-09-17 16:00:57.472885 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:00:57.472898 | orchestrator | Wednesday 17 September 2025 16:00:57 +0000 (0:00:01.376) 0:00:21.144 ***
2025-09-17 16:00:57.472909 | orchestrator | ===============================================================================
2025-09-17 16:00:57.472919 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.51s
2025-09-17 16:00:57.472930 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.14s
2025-09-17 16:00:57.472941 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.38s
2025-09-17 16:00:57.472951 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.31s
2025-09-17 16:00:57.472979 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.23s
2025-09-17 16:00:57.472990 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.20s
2025-09-17 16:00:57.473001 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.18s
2025-09-17 16:00:57.473011 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.95s
2025-09-17 16:00:57.473022 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.84s
2025-09-17 16:00:57.473032 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.78s
2025-09-17 16:00:57.473043 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s
2025-09-17 16:00:57.473054 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s
2025-09-17 16:00:57.708797 | orchestrator |
2025-09-17 16:00:57.712159 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Sep 17 16:00:57 UTC 2025
2025-09-17 16:00:57.712189 | orchestrator |
2025-09-17 16:00:59.477299 | orchestrator | 2025-09-17 16:00:59 | INFO  | Collection nutshell is prepared for execution
2025-09-17 16:00:59.477451 | orchestrator | 2025-09-17 16:00:59 | INFO  | D [0] - dotfiles
2025-09-17 16:01:09.552395 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [0] - homer
2025-09-17 16:01:09.552525 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [0] - netdata
2025-09-17 16:01:09.552542 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [0] - openstackclient
2025-09-17 16:01:09.552554 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [0] - phpmyadmin
2025-09-17 16:01:09.552565 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [0] - common
2025-09-17 16:01:09.555703 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [1] -- loadbalancer
2025-09-17 16:01:09.555780 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [2] --- opensearch
2025-09-17 16:01:09.555805 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [2] --- mariadb-ng
2025-09-17 16:01:09.556010 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [3] ---- horizon
2025-09-17 16:01:09.556259 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [3] ---- keystone
2025-09-17 16:01:09.556614 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [4] ----- neutron
2025-09-17 16:01:09.556909 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [5] ------ wait-for-nova
2025-09-17 16:01:09.557337 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [5] ------ octavia
2025-09-17 16:01:09.558608 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [4] ----- barbican
2025-09-17 16:01:09.559188 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [4] ----- designate
2025-09-17 16:01:09.559616 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [4] ----- ironic
2025-09-17 16:01:09.559636 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [4] ----- placement
2025-09-17 16:01:09.559648 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [4] ----- magnum
2025-09-17 16:01:09.560706 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [1] -- openvswitch
2025-09-17 16:01:09.560899 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [2] --- ovn
2025-09-17 16:01:09.561303 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [1] -- memcached
2025-09-17 16:01:09.561594 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [1] -- redis
2025-09-17 16:01:09.561804 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [1] -- rabbitmq-ng
2025-09-17 16:01:09.562288 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [0] - kubernetes
2025-09-17 16:01:09.565091 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [1] -- kubeconfig
2025-09-17 16:01:09.565375 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [1] -- copy-kubeconfig
2025-09-17 16:01:09.565698 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [0] - ceph
2025-09-17 16:01:09.568251 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [1] -- ceph-pools
2025-09-17 16:01:09.568324 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [2] --- copy-ceph-keys
2025-09-17 16:01:09.568341 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [3] ---- cephclient
2025-09-17 16:01:09.568526 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-09-17 16:01:09.568725 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [4] ----- wait-for-keystone
2025-09-17 16:01:09.568814 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [5] ------ kolla-ceph-rgw
2025-09-17 16:01:09.569168 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [5] ------ glance
2025-09-17 16:01:09.569189 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [5] ------ cinder
2025-09-17 16:01:09.569385 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [5] ------ nova
2025-09-17 16:01:09.569596 | orchestrator | 2025-09-17 16:01:09 | INFO  | A [4] ----- prometheus
2025-09-17 16:01:09.569882 | orchestrator | 2025-09-17 16:01:09 | INFO  | D [5] ------ grafana
2025-09-17 16:01:09.739510 | orchestrator | 2025-09-17 16:01:09 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-09-17 16:01:09.739594 | orchestrator | 2025-09-17 16:01:09 | INFO  | Tasks are running in the background
2025-09-17 16:01:12.383908 | orchestrator | 2025-09-17 16:01:12 | INFO  | No task IDs specified, wait for
all currently running tasks 2025-09-17 16:01:14.486990 | orchestrator | 2025-09-17 16:01:14 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED 2025-09-17 16:01:14.487718 | orchestrator | 2025-09-17 16:01:14 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED 2025-09-17 16:01:14.488001 | orchestrator | 2025-09-17 16:01:14 | INFO  | Task a84077a8-6120-4e67-a398-bbf8ba4a415a is in state STARTED 2025-09-17 16:01:14.488489 | orchestrator | 2025-09-17 16:01:14 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:01:14.488967 | orchestrator | 2025-09-17 16:01:14 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED 2025-09-17 16:01:14.489503 | orchestrator | 2025-09-17 16:01:14 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED 2025-09-17 16:01:14.489990 | orchestrator | 2025-09-17 16:01:14 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED 2025-09-17 16:01:14.490055 | orchestrator | 2025-09-17 16:01:14 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:01:17.525738 | orchestrator | 2025-09-17 16:01:17 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED 2025-09-17 16:01:17.525913 | orchestrator | 2025-09-17 16:01:17 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED 2025-09-17 16:01:17.525949 | orchestrator | 2025-09-17 16:01:17 | INFO  | Task a84077a8-6120-4e67-a398-bbf8ba4a415a is in state STARTED 2025-09-17 16:01:17.526779 | orchestrator | 2025-09-17 16:01:17 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:01:17.527321 | orchestrator | 2025-09-17 16:01:17 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED 2025-09-17 16:01:17.530476 | orchestrator | 2025-09-17 16:01:17 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED 2025-09-17 16:01:17.530506 | orchestrator | 2025-09-17 16:01:17 | INFO  | Task 
2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED 2025-09-17 16:01:17.530526 | orchestrator | 2025-09-17 16:01:17 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:01:20.636988 | orchestrator | 2025-09-17 16:01:20 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED 2025-09-17 16:01:20.637082 | orchestrator | 2025-09-17 16:01:20 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED 2025-09-17 16:01:20.637099 | orchestrator | 2025-09-17 16:01:20 | INFO  | Task a84077a8-6120-4e67-a398-bbf8ba4a415a is in state STARTED 2025-09-17 16:01:20.637110 | orchestrator | 2025-09-17 16:01:20 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:01:20.637121 | orchestrator | 2025-09-17 16:01:20 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED 2025-09-17 16:01:20.637132 | orchestrator | 2025-09-17 16:01:20 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED 2025-09-17 16:01:20.637143 | orchestrator | 2025-09-17 16:01:20 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED 2025-09-17 16:01:20.637154 | orchestrator | 2025-09-17 16:01:20 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:01:23.922604 | orchestrator | 2025-09-17 16:01:23 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED 2025-09-17 16:01:23.922706 | orchestrator | 2025-09-17 16:01:23 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED 2025-09-17 16:01:23.922722 | orchestrator | 2025-09-17 16:01:23 | INFO  | Task a84077a8-6120-4e67-a398-bbf8ba4a415a is in state STARTED 2025-09-17 16:01:23.922733 | orchestrator | 2025-09-17 16:01:23 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:01:23.922744 | orchestrator | 2025-09-17 16:01:23 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED 2025-09-17 16:01:23.922754 | orchestrator | 2025-09-17 16:01:23 | INFO  | Task 
32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED 2025-09-17 16:01:23.922765 | orchestrator | 2025-09-17 16:01:23 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED 2025-09-17 16:01:23.922775 | orchestrator | 2025-09-17 16:01:23 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:01:26.923608 | orchestrator | 2025-09-17 16:01:26 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED 2025-09-17 16:01:26.925002 | orchestrator | 2025-09-17 16:01:26 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED 2025-09-17 16:01:26.927857 | orchestrator | 2025-09-17 16:01:26 | INFO  | Task a84077a8-6120-4e67-a398-bbf8ba4a415a is in state STARTED 2025-09-17 16:01:26.928331 | orchestrator | 2025-09-17 16:01:26 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:01:26.929137 | orchestrator | 2025-09-17 16:01:26 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED 2025-09-17 16:01:26.930630 | orchestrator | 2025-09-17 16:01:26 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED 2025-09-17 16:01:26.931290 | orchestrator | 2025-09-17 16:01:26 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED 2025-09-17 16:01:26.931556 | orchestrator | 2025-09-17 16:01:26 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:01:30.097719 | orchestrator | 2025-09-17 16:01:30 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED 2025-09-17 16:01:30.101072 | orchestrator | 2025-09-17 16:01:30 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED 2025-09-17 16:01:30.101891 | orchestrator | 2025-09-17 16:01:30 | INFO  | Task a84077a8-6120-4e67-a398-bbf8ba4a415a is in state STARTED 2025-09-17 16:01:30.106673 | orchestrator | 2025-09-17 16:01:30 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:01:30.107517 | orchestrator | 2025-09-17 16:01:30 | INFO  | Task 
5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED 2025-09-17 16:01:30.108682 | orchestrator | 2025-09-17 16:01:30 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED 2025-09-17 16:01:30.109737 | orchestrator | 2025-09-17 16:01:30 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED 2025-09-17 16:01:30.109758 | orchestrator | 2025-09-17 16:01:30 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:01:33.194645 | orchestrator | 2025-09-17 16:01:33 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED 2025-09-17 16:01:33.194922 | orchestrator | 2025-09-17 16:01:33 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED 2025-09-17 16:01:33.196159 | orchestrator | 2025-09-17 16:01:33.196189 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-17 16:01:33.196202 | orchestrator | 2025-09-17 16:01:33.196213 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-09-17 16:01:33.196243 | orchestrator | Wednesday 17 September 2025 16:01:20 +0000 (0:00:00.695) 0:00:00.695 *** 2025-09-17 16:01:33.196255 | orchestrator | changed: [testbed-manager] 2025-09-17 16:01:33.196267 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:01:33.196278 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:01:33.196288 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:01:33.196299 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:01:33.196309 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:01:33.196319 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:01:33.196330 | orchestrator | 2025-09-17 16:01:33.196341 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-09-17 16:01:33.196351 | orchestrator | Wednesday 17 September 2025 16:01:25 +0000 (0:00:04.238) 0:00:04.934 *** 2025-09-17 16:01:33.196362 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-17 16:01:33.196374 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-17 16:01:33.196384 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-17 16:01:33.196395 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-17 16:01:33.196405 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-17 16:01:33.196416 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-17 16:01:33.196426 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-17 16:01:33.196437 | orchestrator | 2025-09-17 16:01:33.196488 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-09-17 16:01:33.196502 | orchestrator | Wednesday 17 September 2025 16:01:26 +0000 (0:00:01.288) 0:00:06.223 *** 2025-09-17 16:01:33.196518 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 16:01:26.149859', 'end': '2025-09-17 16:01:26.154411', 'delta': '0:00:00.004552', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 16:01:33.196540 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 16:01:25.828462', 'end': '2025-09-17 16:01:25.837552', 'delta': '0:00:00.009090', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 16:01:33.196556 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 16:01:25.704936', 'end': '2025-09-17 16:01:25.712411', 'delta': '0:00:00.007475', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 16:01:33.196597 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 16:01:25.831872', 'end': '2025-09-17 16:01:25.840727', 'delta': '0:00:00.008855', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 16:01:33.196610 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 16:01:26.021780', 'end': '2025-09-17 16:01:26.030666', 'delta': '0:00:00.008886', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 16:01:33.196621 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 16:01:26.078452', 'end': '2025-09-17 16:01:26.088753', 'delta': '0:00:00.010301', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 16:01:33.196632 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-17 16:01:26.093626', 'end': '2025-09-17 16:01:26.103011', 'delta': '0:00:00.009385', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-17 16:01:33.196643 | orchestrator | 2025-09-17 16:01:33.196655 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-09-17 16:01:33.196666 | orchestrator | Wednesday 17 September 2025 16:01:28 +0000 (0:00:02.269) 0:00:08.492 *** 2025-09-17 16:01:33.196676 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-17 16:01:33.196692 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-17 16:01:33.196703 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-17 16:01:33.196713 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-17 16:01:33.196734 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-17 16:01:33.196745 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-17 16:01:33.196756 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-17 16:01:33.196767 | orchestrator | 2025-09-17 16:01:33.196777 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-09-17 16:01:33.196788 | orchestrator | Wednesday 17 September 2025 16:01:29 +0000 (0:00:01.185) 0:00:09.678 *** 2025-09-17 16:01:33.196798 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-17 16:01:33.196809 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-17 16:01:33.196916 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-17 16:01:33.196933 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-17 16:01:33.196944 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-17 16:01:33.196954 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-17 16:01:33.196965 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-17 16:01:33.196975 | orchestrator | 2025-09-17 16:01:33.196986 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:01:33.197006 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:01:33.197018 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:01:33.197029 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:01:33.197047 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:01:33.197066 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:01:33.197084 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:01:33.197102 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:01:33.197120 | orchestrator |
2025-09-17 16:01:33.197138 | orchestrator |
2025-09-17 16:01:33.197155 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:01:33.197173 | orchestrator | Wednesday 17 September 2025 16:01:32 +0000 (0:00:02.670) 0:00:12.351 ***
2025-09-17 16:01:33.197191 | orchestrator | ===============================================================================
2025-09-17 16:01:33.197209 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.24s
2025-09-17 16:01:33.197227 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.67s
2025-09-17 16:01:33.197244 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.27s
2025-09-17 16:01:33.197262 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.29s
2025-09-17 16:01:33.197280 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.19s
2025-09-17 16:01:33.197298 | orchestrator | 2025-09-17 16:01:33 | INFO  | Task a84077a8-6120-4e67-a398-bbf8ba4a415a is in state SUCCESS
2025-09-17 16:01:33.197318 | orchestrator | 2025-09-17 16:01:33 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:01:33.197783 | orchestrator | 2025-09-17 16:01:33 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:01:33.199222 | orchestrator | 2025-09-17 16:01:33 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED
2025-09-17 16:01:33.199257 | orchestrator | 2025-09-17 16:01:33 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:01:33.199269 | orchestrator | 2025-09-17 16:01:33 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:01:36.318216 | orchestrator | 2025-09-17 16:01:36 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED
2025-09-17 16:01:36.344629 | orchestrator | 2025-09-17 16:01:36 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:01:36.345515 | orchestrator | 2025-09-17 16:01:36 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:01:36.347528 | orchestrator | 2025-09-17 16:01:36 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:01:36.348963 | orchestrator | 2025-09-17 16:01:36 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:01:36.350084 | orchestrator | 2025-09-17 16:01:36 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED
2025-09-17 16:01:36.351197 | orchestrator | 2025-09-17 16:01:36 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:01:36.351220 | orchestrator | 2025-09-17 16:01:36 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:01:39.432025 | orchestrator | 2025-09-17 16:01:39 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED
2025-09-17 16:01:39.432129 | orchestrator | 2025-09-17 16:01:39 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:01:39.432155 | orchestrator | 2025-09-17 16:01:39 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:01:39.432174 | orchestrator | 2025-09-17 16:01:39 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:01:39.432192 | orchestrator | 2025-09-17 16:01:39 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:01:39.432210 | orchestrator | 2025-09-17 16:01:39 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED
2025-09-17 16:01:39.432229 | orchestrator | 2025-09-17 16:01:39 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:01:39.432249 | orchestrator | 2025-09-17 16:01:39 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:01:42.423295 | orchestrator | 2025-09-17 16:01:42 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED
2025-09-17 16:01:42.423375 | orchestrator | 2025-09-17 16:01:42 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:01:42.423790 | orchestrator | 2025-09-17 16:01:42 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:01:42.424693 | orchestrator | 2025-09-17 16:01:42 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:01:42.425875 | orchestrator | 2025-09-17 16:01:42 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:01:42.426932 | orchestrator | 2025-09-17 16:01:42 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED
2025-09-17 16:01:42.427782 | orchestrator | 2025-09-17 16:01:42 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:01:42.427802 | orchestrator | 2025-09-17 16:01:42 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:01:45.600863 | orchestrator | 2025-09-17 16:01:45 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED
2025-09-17 16:01:45.601059 | orchestrator | 2025-09-17 16:01:45 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:01:45.601673 | orchestrator | 2025-09-17 16:01:45 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:01:45.602323 | orchestrator | 2025-09-17 16:01:45 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:01:45.602915 | orchestrator | 2025-09-17 16:01:45 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:01:45.603390 | orchestrator | 2025-09-17 16:01:45 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED
2025-09-17 16:01:45.604132 | orchestrator | 2025-09-17 16:01:45 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:01:45.604347 | orchestrator | 2025-09-17 16:01:45 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:01:48.689809 | orchestrator | 2025-09-17 16:01:48 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED
2025-09-17 16:01:48.689865 | orchestrator | 2025-09-17 16:01:48 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:01:48.689872 | orchestrator | 2025-09-17 16:01:48 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:01:48.689878 | orchestrator | 2025-09-17 16:01:48 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:01:48.689883 | orchestrator | 2025-09-17 16:01:48 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:01:48.689888 | orchestrator | 2025-09-17 16:01:48 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED
2025-09-17 16:01:48.689893 | orchestrator | 2025-09-17 16:01:48 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:01:48.689898 | orchestrator | 2025-09-17 16:01:48 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:01:51.779114 | orchestrator | 2025-09-17 16:01:51 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED
2025-09-17 16:01:51.780619 | orchestrator | 2025-09-17 16:01:51 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:01:51.781523 | orchestrator | 2025-09-17 16:01:51 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:01:51.782651 | orchestrator | 2025-09-17 16:01:51 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:01:51.784007 | orchestrator | 2025-09-17 16:01:51 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:01:51.785101 | orchestrator | 2025-09-17 16:01:51 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state STARTED
2025-09-17 16:01:51.786883 | orchestrator | 2025-09-17 16:01:51 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:01:51.786960 | orchestrator | 2025-09-17 16:01:51 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:01:55.027651 | orchestrator | 2025-09-17 16:01:54 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED
2025-09-17 16:01:55.027741 | orchestrator | 2025-09-17 16:01:54 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:01:55.027756 | orchestrator | 2025-09-17 16:01:54 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:01:55.027767 | orchestrator | 2025-09-17 16:01:54 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:01:55.027778 | orchestrator | 2025-09-17 16:01:54 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:01:55.027789 | orchestrator | 2025-09-17 16:01:54 | INFO  | Task 32c90b7a-bb39-46f4-9ef1-ab7ba00830ee is in state SUCCESS
2025-09-17 16:01:55.027822 | orchestrator | 2025-09-17 16:01:54 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:01:55.027834 | orchestrator | 2025-09-17 16:01:54 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:01:58.080664 | orchestrator | 2025-09-17 16:01:57 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED
2025-09-17 16:01:58.080723 | orchestrator | 2025-09-17 16:01:57 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:01:58.080731 | orchestrator | 2025-09-17 16:01:57 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:01:58.080737 | orchestrator | 2025-09-17 16:01:57 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:01:58.080744 | orchestrator | 2025-09-17 16:01:57 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:01:58.080750 | orchestrator | 2025-09-17 16:01:57 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:01:58.080756 | orchestrator | 2025-09-17 16:01:57 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:02:01.098938 | orchestrator | 2025-09-17 16:02:01 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED
2025-09-17 16:02:01.099048 | orchestrator | 2025-09-17 16:02:01 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:02:01.099065 | orchestrator | 2025-09-17 16:02:01 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:02:01.103262 | orchestrator | 2025-09-17 16:02:01 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:02:01.103312 | orchestrator | 2025-09-17 16:02:01 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:02:01.103974 | orchestrator | 2025-09-17 16:02:01 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:02:01.104002 | orchestrator | 2025-09-17 16:02:01 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:02:04.157341 | orchestrator | 2025-09-17 16:02:04 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED
2025-09-17 16:02:04.157851 | orchestrator | 2025-09-17 16:02:04 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:02:04.163360 | orchestrator | 2025-09-17 16:02:04 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:02:04.165188 | orchestrator | 2025-09-17 16:02:04 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:02:04.167015 | orchestrator | 2025-09-17 16:02:04 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:02:04.170588 | orchestrator | 2025-09-17 16:02:04 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:02:04.170651 | orchestrator | 2025-09-17 16:02:04 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:02:07.309903 | orchestrator | 2025-09-17 16:02:07 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state STARTED
2025-09-17 16:02:07.309990 | orchestrator | 2025-09-17 16:02:07 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:02:07.310005 | orchestrator | 2025-09-17 16:02:07 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:02:07.310066 | orchestrator | 2025-09-17 16:02:07 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:02:07.310078 | orchestrator | 2025-09-17 16:02:07 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:02:07.310089 | orchestrator | 2025-09-17 16:02:07 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:02:07.310121 | orchestrator | 2025-09-17 16:02:07 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:02:10.274879 | orchestrator | 2025-09-17 16:02:10 | INFO  | Task e978dc46-fbb6-4474-872f-de17ebd681f4 is in state SUCCESS
2025-09-17 16:02:10.275975 | orchestrator | 2025-09-17 16:02:10 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:02:10.277634 | orchestrator | 2025-09-17 16:02:10 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:02:10.279779 | orchestrator | 2025-09-17 16:02:10 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:02:10.281340 | orchestrator | 2025-09-17 16:02:10 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:02:10.282997 | orchestrator | 2025-09-17 16:02:10 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:02:10.284039 | orchestrator | 2025-09-17 16:02:10 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:02:13.322368 | orchestrator | 2025-09-17 16:02:13 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:02:13.324098 | orchestrator | 2025-09-17 16:02:13 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:02:13.325722 | orchestrator | 2025-09-17 16:02:13 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:02:13.329824 | orchestrator | 2025-09-17 16:02:13 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:02:13.331161 | orchestrator | 2025-09-17 16:02:13 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:02:13.331192 | orchestrator | 2025-09-17 16:02:13 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:02:16.410212 | orchestrator | 2025-09-17 16:02:16 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:02:16.410317 | orchestrator | 2025-09-17 16:02:16 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:02:16.412416 | orchestrator | 2025-09-17 16:02:16 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:02:16.413888 | orchestrator | 2025-09-17 16:02:16 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:02:16.414361 | orchestrator | 2025-09-17 16:02:16 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:02:16.414425 | orchestrator | 2025-09-17 16:02:16 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:02:19.468405 | orchestrator | 2025-09-17 16:02:19 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:02:19.469752 | orchestrator | 2025-09-17 16:02:19 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:02:19.470905 | orchestrator | 2025-09-17 16:02:19 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:02:19.472460 | orchestrator | 2025-09-17 16:02:19 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:02:19.474178 | orchestrator | 2025-09-17 16:02:19 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:02:19.474202 | orchestrator | 2025-09-17 16:02:19 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:02:22.570614 | orchestrator | 2025-09-17 16:02:22 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:02:22.575445 | orchestrator | 2025-09-17 16:02:22 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state STARTED
2025-09-17 16:02:22.578707 | orchestrator | 2025-09-17 16:02:22 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:02:22.581624 | orchestrator | 2025-09-17 16:02:22 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED
2025-09-17 16:02:22.584643 | orchestrator | 2025-09-17 16:02:22 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:02:22.584898 | orchestrator | 2025-09-17 16:02:22 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:02:25.632332 | orchestrator | 2025-09-17 16:02:25 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED
2025-09-17 16:02:25.632721 | orchestrator | 2025-09-17 16:02:25 | INFO  | Task cdd1cf7f-17e2-467c-8706-014ce73797d2 is in state SUCCESS
2025-09-17 16:02:25.634399 | orchestrator |
2025-09-17 16:02:25.634432 | orchestrator |
2025-09-17 16:02:25.634438 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-09-17 16:02:25.634443 | orchestrator |
2025-09-17 16:02:25.634447 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-09-17 16:02:25.634451 | orchestrator | Wednesday 17 September 2025 16:01:19 +0000 (0:00:00.286) 0:00:00.286 ***
2025-09-17 16:02:25.634455 | orchestrator | ok: [testbed-manager] => {
2025-09-17 16:02:25.634460 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-09-17 16:02:25.634466 | orchestrator | }
2025-09-17 16:02:25.634470 | orchestrator |
2025-09-17 16:02:25.634474 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-09-17 16:02:25.634481 | orchestrator | Wednesday 17 September 2025 16:01:20 +0000 (0:00:00.459) 0:00:00.745 ***
2025-09-17 16:02:25.634485 | orchestrator | ok: [testbed-manager]
2025-09-17 16:02:25.634489 | orchestrator |
2025-09-17 16:02:25.634493 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-09-17 16:02:25.634497 | orchestrator | Wednesday 17 September 2025 16:01:21 +0000 (0:00:01.130) 0:00:01.887 ***
2025-09-17 16:02:25.634501 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-09-17 16:02:25.634504 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-09-17 16:02:25.634508 | orchestrator |
2025-09-17 16:02:25.634512 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-09-17 16:02:25.634516 | orchestrator | Wednesday 17 September 2025 16:01:22 +0000 (0:00:01.534) 0:00:03.422 ***
2025-09-17 16:02:25.634520 | orchestrator | changed: [testbed-manager]
2025-09-17 16:02:25.634523 | orchestrator |
2025-09-17 16:02:25.634527 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-09-17 16:02:25.634531 | orchestrator | Wednesday 17 September 2025 16:01:24 +0000 (0:00:01.724) 0:00:05.147 ***
2025-09-17 16:02:25.634535 | orchestrator | changed: [testbed-manager]
2025-09-17 16:02:25.634538 | orchestrator |
2025-09-17 16:02:25.634542 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-09-17 16:02:25.634546 | orchestrator | Wednesday 17 September 2025 16:01:26 +0000 (0:00:01.589) 0:00:06.736 ***
2025-09-17 16:02:25.634550 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-09-17 16:02:25.634554 | orchestrator | ok: [testbed-manager]
2025-09-17 16:02:25.634558 | orchestrator |
2025-09-17 16:02:25.634561 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-09-17 16:02:25.634565 | orchestrator | Wednesday 17 September 2025 16:01:51 +0000 (0:00:24.928) 0:00:31.665 ***
2025-09-17 16:02:25.634569 | orchestrator | changed: [testbed-manager]
2025-09-17 16:02:25.634573 | orchestrator |
2025-09-17 16:02:25.634577 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:02:25.634581 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:02:25.634594 | orchestrator |
2025-09-17 16:02:25.634598 | orchestrator |
2025-09-17 16:02:25.634602 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:02:25.634606 | orchestrator | Wednesday 17 September 2025 16:01:53 +0000 (0:00:02.750) 0:00:34.415 ***
2025-09-17 16:02:25.634610 | orchestrator | ===============================================================================
2025-09-17 16:02:25.634613 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.93s
2025-09-17 16:02:25.634617 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.75s
2025-09-17 16:02:25.634621 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.72s
2025-09-17 16:02:25.634625 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.59s
2025-09-17 16:02:25.634628 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.53s
2025-09-17 16:02:25.634632 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.14s
2025-09-17 16:02:25.634636 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.46s
2025-09-17 16:02:25.634662 | orchestrator |
2025-09-17 16:02:25.634666 | orchestrator |
2025-09-17 16:02:25.634670 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-09-17 16:02:25.634674 | orchestrator |
2025-09-17 16:02:25.634678 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-09-17 16:02:25.634681 | orchestrator | Wednesday 17 September 2025 16:01:22 +0000 (0:00:00.318) 0:00:00.318 ***
2025-09-17 16:02:25.634685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-09-17 16:02:25.634690 | orchestrator |
2025-09-17 16:02:25.634694 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-09-17 16:02:25.634698 | orchestrator | Wednesday 17 September 2025 16:01:22 +0000 (0:00:00.171) 0:00:00.490 ***
2025-09-17 16:02:25.634702 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-09-17 16:02:25.634705 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-09-17 16:02:25.634709 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-09-17 16:02:25.634713 | orchestrator |
2025-09-17 16:02:25.634717 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-09-17 16:02:25.634720 | orchestrator | Wednesday 17 September 2025 16:01:24 +0000 (0:00:02.292) 0:00:02.783 ***
2025-09-17 16:02:25.634724 | orchestrator | changed: [testbed-manager]
2025-09-17 16:02:25.634728 | orchestrator |
2025-09-17 16:02:25.634731 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-09-17 16:02:25.634735 | orchestrator | Wednesday 17 September 2025 16:01:26 +0000 (0:00:01.893) 0:00:04.676 ***
2025-09-17 16:02:25.634745 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-09-17 16:02:25.634749 | orchestrator | ok: [testbed-manager]
2025-09-17 16:02:25.634753 | orchestrator |
2025-09-17 16:02:25.634757 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-09-17 16:02:25.634761 | orchestrator | Wednesday 17 September 2025 16:02:00 +0000 (0:00:33.363) 0:00:38.040 ***
2025-09-17 16:02:25.634764 | orchestrator | changed: [testbed-manager]
2025-09-17 16:02:25.634768 | orchestrator |
2025-09-17 16:02:25.634772 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-09-17 16:02:25.634775 | orchestrator | Wednesday 17 September 2025 16:02:01 +0000 (0:00:00.887) 0:00:38.927 ***
2025-09-17 16:02:25.634779 | orchestrator | ok: [testbed-manager]
2025-09-17 16:02:25.634783 | orchestrator |
2025-09-17 16:02:25.634786 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-09-17 16:02:25.634792 | orchestrator | Wednesday 17 September 2025 16:02:01 +0000 (0:00:00.668) 0:00:39.595 ***
2025-09-17 16:02:25.634796 | orchestrator | changed: [testbed-manager]
2025-09-17 16:02:25.634799 | orchestrator |
2025-09-17 16:02:25.634803 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-09-17 16:02:25.634809 | orchestrator | Wednesday 17 September 2025 16:02:04 +0000 (0:00:02.538) 0:00:42.134 ***
2025-09-17 16:02:25.634813 | orchestrator | changed: [testbed-manager]
2025-09-17 16:02:25.634816 | orchestrator |
2025-09-17 16:02:25.634820 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-09-17 16:02:25.634824 | orchestrator | Wednesday 17 September 2025 16:02:06 +0000 (0:00:01.788) 0:00:43.923 ***
2025-09-17 16:02:25.634827 | orchestrator | changed: [testbed-manager]
2025-09-17 16:02:25.634831 | orchestrator |
2025-09-17 16:02:25.634835 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-09-17 16:02:25.634839 | orchestrator | Wednesday 17 September 2025 16:02:07 +0000 (0:00:01.751) 0:00:45.674 ***
2025-09-17 16:02:25.634842 | orchestrator | ok: [testbed-manager]
2025-09-17 16:02:25.634846 | orchestrator |
2025-09-17 16:02:25.634850 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:02:25.634853 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:02:25.634857 | orchestrator |
2025-09-17 16:02:25.634861 | orchestrator |
2025-09-17 16:02:25.634864 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:02:25.634868 | orchestrator | Wednesday 17 September 2025 16:02:08 +0000 (0:00:00.475) 0:00:46.150 ***
2025-09-17 16:02:25.634872 | orchestrator | ===============================================================================
2025-09-17 16:02:25.634875 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.36s
2025-09-17 16:02:25.634879 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.54s
2025-09-17 16:02:25.634883 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.29s
2025-09-17 16:02:25.634886 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.89s
2025-09-17 16:02:25.634890 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.79s
2025-09-17 16:02:25.634894 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.75s
2025-09-17 16:02:25.634897 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.89s
2025-09-17 16:02:25.634901 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.67s
2025-09-17 16:02:25.634905 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.48s
2025-09-17 16:02:25.634908 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.17s
2025-09-17 16:02:25.634912 | orchestrator |
2025-09-17 16:02:25.634915 | orchestrator |
2025-09-17 16:02:25.634919 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 16:02:25.634923 | orchestrator |
2025-09-17 16:02:25.634926 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 16:02:25.634930 | orchestrator | Wednesday 17 September 2025 16:01:20 +0000 (0:00:00.746) 0:00:00.746 ***
2025-09-17 16:02:25.634934 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-09-17 16:02:25.634937 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-09-17 16:02:25.634941 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-09-17 16:02:25.634945 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-09-17 16:02:25.634948 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-09-17 16:02:25.634952 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-09-17 16:02:25.634956 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-09-17 16:02:25.634959 | orchestrator |
2025-09-17 16:02:25.634963 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-09-17 16:02:25.634967 | orchestrator |
2025-09-17 16:02:25.634970 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-09-17 16:02:25.634976 | orchestrator | Wednesday 17 September 2025 16:01:23 +0000 (0:00:02.607) 0:00:03.353 ***
2025-09-17 16:02:25.634985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:02:25.634991 | orchestrator |
2025-09-17 16:02:25.634994 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-09-17 16:02:25.634998 | orchestrator | Wednesday 17 September 2025 16:01:24 +0000 (0:00:01.394) 0:00:04.747 ***
2025-09-17 16:02:25.635003 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:02:25.635007 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:02:25.635011 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:02:25.635015 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:02:25.635019 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:02:25.635026 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:02:25.635030 | orchestrator | ok: [testbed-manager]
2025-09-17 16:02:25.635034 | orchestrator |
2025-09-17 16:02:25.635038 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-09-17 16:02:25.635043 | orchestrator | Wednesday 17 September 2025 16:01:26 +0000 (0:00:02.123) 0:00:06.871 ***
2025-09-17 16:02:25.635047 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:02:25.635051 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:02:25.635055 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:02:25.635059 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:02:25.635063 | orchestrator | ok: [testbed-manager]
2025-09-17 16:02:25.635067 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:02:25.635071 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:02:25.635075 | orchestrator |
2025-09-17 16:02:25.635079 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-09-17 16:02:25.635085 | orchestrator | Wednesday 17 September 2025 16:01:30 +0000 (0:00:03.680) 0:00:10.552 ***
2025-09-17 16:02:25.635090 | orchestrator | changed: [testbed-manager]
2025-09-17 16:02:25.635094 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:02:25.635098 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:02:25.635102 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:02:25.635106 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:02:25.635110 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:02:25.635114 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:02:25.635118 | orchestrator |
2025-09-17 16:02:25.635123 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-09-17 16:02:25.635127 | orchestrator | Wednesday 17 September 2025 16:01:32 +0000 (0:00:01.940) 0:00:12.492 ***
2025-09-17 16:02:25.635131 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:02:25.635135 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:02:25.635140 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:02:25.635144 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:02:25.635148 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:02:25.635152 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:02:25.635156 | orchestrator | changed: [testbed-manager]
2025-09-17 16:02:25.635161 | orchestrator |
2025-09-17 16:02:25.635165 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-09-17 16:02:25.635169 | orchestrator | Wednesday 17 September 2025 16:01:43 +0000 (0:00:11.014) 0:00:23.507 ***
2025-09-17 16:02:25.635173 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:02:25.635177 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:02:25.635181 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:02:25.635185 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:02:25.635189 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:02:25.635193 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:02:25.635197 | orchestrator | changed: [testbed-manager]
2025-09-17 16:02:25.635201 | orchestrator |
2025-09-17 16:02:25.635206 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-09-17 16:02:25.635210 | orchestrator | Wednesday 17 September 2025 16:02:03 +0000 (0:00:19.837) 0:00:43.345 ***
2025-09-17 16:02:25.635216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:02:25.635221 | orchestrator |
2025-09-17 16:02:25.635225 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-09-17 16:02:25.635230 | orchestrator | Wednesday 17 September 2025 16:02:05 +0000 (0:00:01.988) 0:00:45.333 ***
2025-09-17 16:02:25.635234 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-09-17 16:02:25.635238 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-09-17 16:02:25.635243 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-09-17 16:02:25.635247 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-09-17 16:02:25.635251 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-09-17 16:02:25.635255 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-09-17 16:02:25.635260 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-09-17 16:02:25.635264 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-09-17 16:02:25.635268 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-09-17 16:02:25.635272 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-09-17 16:02:25.635276 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-09-17 16:02:25.635280 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-09-17 16:02:25.635285 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-09-17 16:02:25.635289 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-09-17 16:02:25.635293 | orchestrator |
2025-09-17 16:02:25.635297 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-09-17 16:02:25.635302 | orchestrator | Wednesday 17 September 2025 16:02:09 +0000 (0:00:00.965) 0:00:49.668 ***
2025-09-17 16:02:25.635306 | orchestrator | ok: [testbed-manager]
2025-09-17 16:02:25.635310 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:02:25.635314 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:02:25.635318 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:02:25.635322 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:02:25.635327 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:02:25.635331 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:02:25.635335 | orchestrator |
2025-09-17 16:02:25.635339 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-09-17 16:02:25.635344 | orchestrator | Wednesday 17 September 2025 16:02:10 +0000 (0:00:00.965) 0:00:50.634 ***
2025-09-17 16:02:25.635348 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:02:25.635352 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:02:25.635357 | orchestrator | changed: [testbed-manager]
2025-09-17 16:02:25.635360 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:02:25.635364 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:02:25.635368 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:02:25.635382 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:02:25.635386 | orchestrator |
2025-09-17 16:02:25.635390 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-09-17 16:02:25.635396 | orchestrator | Wednesday 17 September 2025 16:02:11 +0000 (0:00:01.469) 0:00:52.103 ***
2025-09-17 16:02:25.635400 | orchestrator | ok: [testbed-manager]
2025-09-17 16:02:25.635403 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:02:25.635407 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:02:25.635411 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:02:25.635415 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:02:25.635418 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:02:25.635422 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:02:25.635426 | orchestrator |
2025-09-17 16:02:25.635430 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-09-17 16:02:25.635433 | orchestrator | Wednesday 17 September 2025 16:02:13 +0000 (0:00:01.649) 0:00:53.753 ***
2025-09-17 16:02:25.635441 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:02:25.635445 | orchestrator | ok: [testbed-manager]
2025-09-17 16:02:25.635448 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:02:25.635452 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:02:25.635496 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:02:25.635501 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:02:25.635506 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:02:25.635510 | orchestrator |
2025-09-17 16:02:25.635514 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-09-17 16:02:25.635518 | orchestrator | Wednesday 17 September 2025 16:02:16 +0000 (0:00:02.889) 0:00:56.643 ***
2025-09-17 16:02:25.635522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-09-17 16:02:25.635526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2,
testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:02:25.635530 | orchestrator | 2025-09-17 16:02:25.635534 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-17 16:02:25.635538 | orchestrator | Wednesday 17 September 2025 16:02:18 +0000 (0:00:01.959) 0:00:58.602 *** 2025-09-17 16:02:25.635541 | orchestrator | changed: [testbed-manager] 2025-09-17 16:02:25.635545 | orchestrator | 2025-09-17 16:02:25.635549 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-17 16:02:25.635553 | orchestrator | Wednesday 17 September 2025 16:02:20 +0000 (0:00:02.454) 0:01:01.057 *** 2025-09-17 16:02:25.635556 | orchestrator | changed: [testbed-manager] 2025-09-17 16:02:25.635560 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:02:25.635564 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:02:25.635568 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:02:25.635571 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:02:25.635575 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:02:25.635579 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:02:25.635583 | orchestrator | 2025-09-17 16:02:25.635586 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:02:25.635590 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:02:25.635594 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:02:25.635598 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:02:25.635602 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:02:25.635606 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-09-17 16:02:25.635609 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:02:25.635613 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:02:25.635617 | orchestrator | 2025-09-17 16:02:25.635621 | orchestrator | 2025-09-17 16:02:25.635624 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:02:25.635628 | orchestrator | Wednesday 17 September 2025 16:02:23 +0000 (0:00:03.005) 0:01:04.063 *** 2025-09-17 16:02:25.635632 | orchestrator | =============================================================================== 2025-09-17 16:02:25.635636 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.84s 2025-09-17 16:02:25.635639 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.02s 2025-09-17 16:02:25.635646 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.33s 2025-09-17 16:02:25.635650 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.68s 2025-09-17 16:02:25.635654 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.01s 2025-09-17 16:02:25.635657 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.89s 2025-09-17 16:02:25.635661 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.61s 2025-09-17 16:02:25.635665 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.45s 2025-09-17 16:02:25.635668 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.12s 2025-09-17 16:02:25.635672 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.99s 2025-09-17 
16:02:25.635676 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.96s 2025-09-17 16:02:25.635682 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.94s 2025-09-17 16:02:25.635686 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.65s 2025-09-17 16:02:25.635690 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.47s 2025-09-17 16:02:25.635694 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.39s 2025-09-17 16:02:25.635697 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 0.97s 2025-09-17 16:02:25.635701 | orchestrator | 2025-09-17 16:02:25 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:02:25.635705 | orchestrator | 2025-09-17 16:02:25 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED 2025-09-17 16:02:25.636514 | orchestrator | 2025-09-17 16:02:25 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED 2025-09-17 16:02:25.636753 | orchestrator | 2025-09-17 16:02:25 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:02:28.674710 | orchestrator | 2025-09-17 16:02:28 | INFO  | Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state STARTED 2025-09-17 16:02:28.676170 | orchestrator | 2025-09-17 16:02:28 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:02:28.677441 | orchestrator | 2025-09-17 16:02:28 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED 2025-09-17 16:02:28.679255 | orchestrator | 2025-09-17 16:02:28 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED 2025-09-17 16:02:28.679278 | orchestrator | 2025-09-17 16:02:28 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:02:31.711719 | orchestrator | 2025-09-17 16:02:31 | INFO  
| Task ce81724c-c2db-4623-9f87-e680d43809c5 is in state SUCCESS 2025-09-17 16:02:31.714901 | orchestrator | 2025-09-17 16:02:31 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:02:31.717546 | orchestrator | 2025-09-17 16:02:31 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state STARTED 2025-09-17 16:02:31.718355 | orchestrator | 2025-09-17 16:02:31 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED 2025-09-17 16:02:31.718806 | orchestrator | 2025-09-17 16:02:31 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:04:06.112804 | orchestrator | 2025-09-17 16:04:06 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:04:06.117091 | orchestrator | 2025-09-17 16:04:06 | INFO  | Task 5d6c1903-2a43-4c7c-ba4b-fc079984df7a is in state SUCCESS 2025-09-17 16:04:06.119054 | orchestrator | 2025-09-17 16:04:06.119100 | orchestrator | 2025-09-17 16:04:06.119113 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-17 16:04:06.119125 | orchestrator | 2025-09-17 16:04:06.119237 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-17 16:04:06.119252 | orchestrator | Wednesday 17 September 2025 16:01:36 +0000 (0:00:00.225) 0:00:00.225 *** 2025-09-17 16:04:06.119263 | orchestrator | ok: [testbed-manager] 2025-09-17 16:04:06.119275 | orchestrator | 2025-09-17 16:04:06.119286 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-17 16:04:06.119298 | orchestrator | Wednesday 17 September 2025 16:01:37 +0000 (0:00:01.294) 0:00:01.519 *** 2025-09-17 16:04:06.119309 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-17 16:04:06.119372 | orchestrator | 2025-09-17 16:04:06.119384 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-17 16:04:06.119395 | orchestrator | Wednesday 17 September 2025 16:01:38 +0000 (0:00:00.622) 0:00:02.141 *** 2025-09-17 16:04:06.119406 | orchestrator | changed: [testbed-manager] 2025-09-17 16:04:06.119463 | orchestrator | 2025-09-17 16:04:06.119476 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service]
******************* 2025-09-17 16:04:06.119514 | orchestrator | Wednesday 17 September 2025 16:01:39 +0000 (0:00:01.226) 0:00:03.367 *** 2025-09-17 16:04:06.119526 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-09-17 16:04:06.119538 | orchestrator | ok: [testbed-manager] 2025-09-17 16:04:06.119549 | orchestrator | 2025-09-17 16:04:06.119560 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-17 16:04:06.119579 | orchestrator | Wednesday 17 September 2025 16:02:27 +0000 (0:00:47.635) 0:00:51.003 *** 2025-09-17 16:04:06.119590 | orchestrator | changed: [testbed-manager] 2025-09-17 16:04:06.119601 | orchestrator | 2025-09-17 16:04:06.119612 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:04:06.119623 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:04:06.119637 | orchestrator | 2025-09-17 16:04:06.119650 | orchestrator | 2025-09-17 16:04:06.119662 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:04:06.119674 | orchestrator | Wednesday 17 September 2025 16:02:31 +0000 (0:00:03.952) 0:00:54.955 *** 2025-09-17 16:04:06.119686 | orchestrator | =============================================================================== 2025-09-17 16:04:06.119699 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 47.64s 2025-09-17 16:04:06.119712 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.95s 2025-09-17 16:04:06.119725 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.29s 2025-09-17 16:04:06.119737 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.23s 2025-09-17 16:04:06.119750 | orchestrator | 
osism.services.phpmyadmin : Create required directories ----------------- 0.62s 2025-09-17 16:04:06.119761 | orchestrator | 2025-09-17 16:04:06.119774 | orchestrator | 2025-09-17 16:04:06.119786 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-09-17 16:04:06.119799 | orchestrator | 2025-09-17 16:04:06.119811 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-17 16:04:06.119824 | orchestrator | Wednesday 17 September 2025 16:01:13 +0000 (0:00:00.209) 0:00:00.209 *** 2025-09-17 16:04:06.119858 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:04:06.119872 | orchestrator | 2025-09-17 16:04:06.119884 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-17 16:04:06.119920 | orchestrator | Wednesday 17 September 2025 16:01:14 +0000 (0:00:01.095) 0:00:01.305 *** 2025-09-17 16:04:06.119934 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 16:04:06.119946 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 16:04:06.119958 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 16:04:06.119971 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 16:04:06.119984 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 16:04:06.119996 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 16:04:06.120007 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 16:04:06.120018 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 16:04:06.120029 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 16:04:06.120040 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 16:04:06.120050 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 16:04:06.120061 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 16:04:06.120072 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 16:04:06.120083 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 16:04:06.120093 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-17 16:04:06.120104 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 16:04:06.120129 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 16:04:06.120140 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 16:04:06.120151 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-17 16:04:06.120162 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 16:04:06.120173 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-17 16:04:06.120201 | orchestrator | 2025-09-17 16:04:06.120213 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-17 16:04:06.120223 | orchestrator | Wednesday 17 September 2025 16:01:18 +0000 (0:00:03.853) 0:00:05.158 *** 2025-09-17 16:04:06.120234 | orchestrator | 
included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:04:06.120247 | orchestrator | 2025-09-17 16:04:06.120258 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-17 16:04:06.120268 | orchestrator | Wednesday 17 September 2025 16:01:20 +0000 (0:00:01.338) 0:00:06.497 *** 2025-09-17 16:04:06.120288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.120313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.120325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.120337 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.120348 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.120381 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.120394 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.120451 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120488 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120552 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120564 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120608 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.120619 | orchestrator | 2025-09-17 16:04:06.120631 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-17 16:04:06.120647 | orchestrator | Wednesday 17 September 2025 16:01:25 +0000 (0:00:05.400) 0:00:11.898 *** 2025-09-17 16:04:06.120660 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-17 16:04:06.120676 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.120694 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.120705 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:04:06.120716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-17 16:04:06.120727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.120739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.120751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-17 16:04:06.120768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.120780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.120811 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:06.120823 | 
orchestrator | skipping: [testbed-node-1] 2025-09-17 16:04:06.120838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-17 16:04:06.120850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.120861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.120873 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:04:06.120884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-17 16:04:06.120895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.120912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.120924 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:04:06.120941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-17 16:04:06.120956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.120968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.120979 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:04:06.120990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-17 16:04:06.121001 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.121013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.121024 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:04:06.121034 | orchestrator | 2025-09-17 16:04:06.121045 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-17 16:04:06.121056 | orchestrator | Wednesday 17 September 2025 16:01:26 +0000 (0:00:01.144) 0:00:13.042 *** 2025-09-17 16:04:06.121067 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}})  2025-09-17 16:04:06.121091 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.121103 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.121114 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:04:06.121129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-17 16:04:06.121141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.121152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:04:06.121163 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:04:06.121174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-17 16:04:06.121217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.121751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.121776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.121794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.121806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.121817 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:06.121828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.121839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.121850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.121870 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:04:06.121881 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:06.121892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.121912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.121924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.121935 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:04:06.121950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.121961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.121973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.121984 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:04:06.121995 | orchestrator |
2025-09-17 16:04:06.122005 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-09-17 16:04:06.122068 | orchestrator | Wednesday 17 September 2025 16:01:29 +0000 (0:00:03.035) 0:00:16.078 ***
2025-09-17 16:04:06.122083 | orchestrator | skipping: [testbed-manager]
2025-09-17 16:04:06.122101 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:06.122112 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:06.122122 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:06.122133 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:04:06.122144 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:04:06.122154 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:04:06.122165 | orchestrator |
2025-09-17 16:04:06.122199 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-09-17 16:04:06.122210 | orchestrator | Wednesday 17 September 2025 16:01:30 +0000 (0:00:00.881) 0:00:16.959 ***
2025-09-17 16:04:06.122221 | orchestrator | skipping: [testbed-manager]
2025-09-17 16:04:06.122232 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:06.122242 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:06.122253 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:06.122263 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:04:06.122274 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:04:06.122284 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:04:06.122295 | orchestrator |
2025-09-17 16:04:06.122306 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-09-17 16:04:06.122316 | orchestrator | Wednesday 17 September 2025 16:01:32 +0000 (0:00:01.638) 0:00:18.598 ***
2025-09-17 16:04:06.122342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.122355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.122366 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.122377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.122389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.122407 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.122471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.122483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122558 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122597 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.122637 | orchestrator |
2025-09-17 16:04:06.122648 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-09-17 16:04:06.122658 | orchestrator | Wednesday 17 September 2025 16:01:38 +0000 (0:00:06.349) 0:00:24.947 ***
2025-09-17 16:04:06.122669 | orchestrator | [WARNING]: Skipped
2025-09-17 16:04:06.122680 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-09-17 16:04:06.122691 | orchestrator | to this access issue:
2025-09-17 16:04:06.122702 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-09-17 16:04:06.122713 | orchestrator | directory
2025-09-17 16:04:06.122723 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 16:04:06.122734 | orchestrator |
2025-09-17 16:04:06.122745 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-09-17 16:04:06.122755 | orchestrator | Wednesday 17 September 2025 16:01:39 +0000 (0:00:01.345) 0:00:26.293 ***
2025-09-17 16:04:06.122766 | orchestrator | [WARNING]: Skipped
2025-09-17 16:04:06.122776 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-09-17 16:04:06.122787 | orchestrator | to this access issue:
2025-09-17 16:04:06.122797 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-09-17 16:04:06.122808 | orchestrator | directory
2025-09-17 16:04:06.122819 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 16:04:06.122830 | orchestrator |
2025-09-17 16:04:06.122840 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-09-17 16:04:06.122851 | orchestrator | Wednesday 17 September 2025 16:01:41 +0000 (0:00:01.219) 0:00:27.513 ***
2025-09-17 16:04:06.122862 | orchestrator | [WARNING]: Skipped
2025-09-17 16:04:06.122872 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-09-17 16:04:06.122883 | orchestrator | to this access issue:
2025-09-17 16:04:06.122893 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-09-17 16:04:06.122904 | orchestrator | directory
2025-09-17 16:04:06.122915 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 16:04:06.122925 | orchestrator |
2025-09-17 16:04:06.122941 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-09-17 16:04:06.122952 | orchestrator | Wednesday 17 September 2025 16:01:41 +0000 (0:00:00.806) 0:00:28.319 ***
2025-09-17 16:04:06.122963 | orchestrator | [WARNING]: Skipped
2025-09-17 16:04:06.122973 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-09-17 16:04:06.122984 | orchestrator | to this access issue:
2025-09-17 16:04:06.122994 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-09-17 16:04:06.123005 | orchestrator | directory
2025-09-17 16:04:06.123016 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 16:04:06.123026 | orchestrator |
2025-09-17 16:04:06.123037 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-09-17 16:04:06.123048 | orchestrator | Wednesday 17 September 2025 16:01:42 +0000 (0:00:00.815) 0:00:29.135 ***
2025-09-17 16:04:06.123058 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:06.123074 | orchestrator | changed: [testbed-manager]
2025-09-17 16:04:06.123085 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:06.123096 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:04:06.123106 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:04:06.123117 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:06.123127 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:04:06.123137 | orchestrator |
2025-09-17 16:04:06.123148 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-09-17 16:04:06.123159 | orchestrator | Wednesday 17 September 2025 16:01:46 +0000 (0:00:03.913) 0:00:33.049 ***
2025-09-17 16:04:06.123169 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-17 16:04:06.123200 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-17 16:04:06.123212 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-17 16:04:06.123223 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-17 16:04:06.123233 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-17 16:04:06.123244 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-17 16:04:06.123255 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-17 16:04:06.123266 | orchestrator |
2025-09-17 16:04:06.123277 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-09-17 16:04:06.123287 | orchestrator | Wednesday 17 September 2025 16:01:51 +0000 (0:00:05.118) 0:00:38.168 ***
2025-09-17 16:04:06.123298 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:06.123309 | orchestrator | changed: [testbed-manager]
2025-09-17 16:04:06.123319 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:04:06.123330 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:06.123341 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:06.123351 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:04:06.123362 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:04:06.123372 | orchestrator |
2025-09-17 16:04:06.123383 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-09-17 16:04:06.123394 | orchestrator | Wednesday 17 September 2025 16:01:55 +0000 (0:00:03.302) 0:00:41.470 ***
2025-09-17 16:04:06.123405 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.123417 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.123429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123452 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123465 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123483 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123495 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.123506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123517 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123529 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.123552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123563 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.123580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123591 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.123603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123614 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-17 16:04:06.123625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123642 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123664 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123676 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123691 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.123703 | orchestrator |
2025-09-17 16:04:06.123714 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-09-17 16:04:06.123725 | orchestrator | Wednesday 17 September 2025 16:01:57 +0000 (0:00:02.687) 0:00:44.157 ***
2025-09-17 16:04:06.123736 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-17 16:04:06.123746 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-17 16:04:06.123757 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-17 16:04:06.123768 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-17 16:04:06.123778 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-17 16:04:06.123789 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-17 16:04:06.123800 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-17 16:04:06.123811 | orchestrator |
2025-09-17 16:04:06.123822 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-09-17 16:04:06.123833 | orchestrator | Wednesday 17 September 2025 16:02:00 +0000 (0:00:02.661) 0:00:46.819 ***
2025-09-17 16:04:06.123843 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-17 16:04:06.123854 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-17 16:04:06.123865 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-17 16:04:06.123876 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-17 16:04:06.123886 | orchestrator | changed: [testbed-node-2] =>
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-17 16:04:06.123897 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-17 16:04:06.123913 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-17 16:04:06.123924 | orchestrator | 2025-09-17 16:04:06.123935 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-17 16:04:06.123945 | orchestrator | Wednesday 17 September 2025 16:02:02 +0000 (0:00:02.614) 0:00:49.434 *** 2025-09-17 16:04:06.123956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.123968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.123986 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.123998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.124013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.124025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.124036 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.124054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.124065 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-09-17 16:04:06.124082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-17 16:04:06.124094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.124113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.124125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-09-17 16:04:06.124136 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.124154 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.124165 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 
16:04:06.124203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.124216 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.124227 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:04:06.124238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.124254 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:04:06.124266 | orchestrator |
2025-09-17 16:04:06.124283 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-09-17 16:04:06.124294 | orchestrator | Wednesday 17 September 2025 16:02:06 +0000 (0:00:03.703) 0:00:53.137 ***
2025-09-17 16:04:06.124305 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:06.124315 | orchestrator | changed: [testbed-manager]
2025-09-17 16:04:06.124326 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:06.124337 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:06.124347 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:04:06.124358 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:04:06.124368 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:04:06.124379 | orchestrator |
2025-09-17 16:04:06.124389 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-09-17 16:04:06.124400 | orchestrator | Wednesday 17 September 2025 16:02:08 +0000 (0:00:02.187) 0:00:55.325 ***
2025-09-17 16:04:06.124410 | orchestrator | changed: [testbed-manager]
2025-09-17 16:04:06.124421 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:06.124431 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:06.124442 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:06.124453 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:04:06.124463 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:04:06.124474 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:04:06.124484 | orchestrator |
2025-09-17 16:04:06.124495 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-17 16:04:06.124505 | orchestrator | Wednesday 17 September 2025 16:02:09 +0000 (0:00:01.045) 0:00:56.370 ***
2025-09-17 16:04:06.124516 | orchestrator |
2025-09-17 16:04:06.124526 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-17 16:04:06.124537 | orchestrator | Wednesday 17 September 2025 16:02:09 +0000 (0:00:00.063) 0:00:56.434 ***
2025-09-17 16:04:06.124548 | orchestrator |
2025-09-17 16:04:06.124558 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-17 16:04:06.124569 | orchestrator | Wednesday 17 September 2025 16:02:10 +0000 (0:00:00.061) 0:00:56.496 ***
2025-09-17 16:04:06.124579 | orchestrator |
2025-09-17 16:04:06.124590 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-17 16:04:06.124600 | orchestrator | Wednesday 17 September 2025 16:02:10 +0000 (0:00:00.171) 0:00:56.667 ***
2025-09-17 16:04:06.124611 | orchestrator |
2025-09-17 16:04:06.124622 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-17 16:04:06.124632 | orchestrator | Wednesday 17 September 2025 16:02:10 +0000 (0:00:00.059) 0:00:56.726 ***
2025-09-17 16:04:06.124642 | orchestrator |
2025-09-17 16:04:06.124653 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-17 16:04:06.124664 | orchestrator | Wednesday 17 September 2025 16:02:10 +0000 (0:00:00.061) 0:00:56.788 ***
2025-09-17 16:04:06.124674 | orchestrator |
2025-09-17 16:04:06.124685 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-17 16:04:06.124695 | orchestrator | Wednesday 17 September 2025 16:02:10 +0000 (0:00:00.059) 0:00:56.847 ***
2025-09-17 16:04:06.124706 | orchestrator |
2025-09-17 16:04:06.124717 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-09-17 16:04:06.124727 | orchestrator | Wednesday 17 September 2025 16:02:10 +0000 (0:00:00.082) 0:00:56.930 ***
2025-09-17 16:04:06.124743 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:06.124754 | orchestrator | changed: [testbed-manager]
2025-09-17 16:04:06.124765 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:04:06.124775 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:06.124786 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:04:06.124797 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:06.124807 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:04:06.124818 | orchestrator |
2025-09-17 16:04:06.124829 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-09-17 16:04:06.124840 | orchestrator | Wednesday 17 September 2025 16:02:49 +0000 (0:00:39.451) 0:01:36.381 ***
2025-09-17 16:04:06.124856 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:06.124867 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:06.124877 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:04:06.124888 | orchestrator | changed: [testbed-manager]
2025-09-17 16:04:06.124898 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:04:06.124909 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:04:06.124919 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:06.124929 | orchestrator |
2025-09-17 16:04:06.124940 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-09-17 16:04:06.124951 | orchestrator | Wednesday 17 September 2025 16:03:54 +0000 (0:01:04.174) 0:02:40.556 ***
2025-09-17 16:04:06.124962 | orchestrator | ok: [testbed-manager]
2025-09-17 16:04:06.124972 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:06.124983 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:06.124993 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:04:06.125004 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:06.125014 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:04:06.125025 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:04:06.125035 | orchestrator |
2025-09-17 16:04:06.125050 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-09-17 16:04:06.125061 | orchestrator | Wednesday 17 September 2025 16:03:56 +0000 (0:00:01.970) 0:02:42.526 ***
2025-09-17 16:04:06.125072 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:06.125082 | orchestrator | changed: [testbed-manager]
2025-09-17 16:04:06.125093 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:06.125103 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:04:06.125114 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:04:06.125124 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:06.125135 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:04:06.125146 | orchestrator |
2025-09-17 16:04:06.125157 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:04:06.125168 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-17 16:04:06.125235 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-17 16:04:06.125248 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-17 16:04:06.125259 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-17 16:04:06.125270 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-17 16:04:06.125281 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-17 16:04:06.125291 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-17 16:04:06.125302 | orchestrator |
2025-09-17 16:04:06.125313 | orchestrator |
2025-09-17 16:04:06.125324 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:04:06.125335 | orchestrator | Wednesday 17 September 2025 16:04:05 +0000 (0:00:09.313) 0:02:51.840 ***
2025-09-17 16:04:06.125345 | orchestrator | ===============================================================================
2025-09-17 16:04:06.125356 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 64.17s
2025-09-17 16:04:06.125367 | orchestrator | common : Restart fluentd container ------------------------------------- 39.45s
2025-09-17 16:04:06.125377 | orchestrator | common : Restart cron container ----------------------------------------- 9.31s
2025-09-17 16:04:06.125388 | orchestrator | common : Copying over config.json files for services -------------------- 6.35s
2025-09-17 16:04:06.125406 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.40s
2025-09-17 16:04:06.125416 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.12s
2025-09-17 16:04:06.125427 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.91s
2025-09-17 16:04:06.125437 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.85s
2025-09-17 16:04:06.125448 | orchestrator | common : Check common containers ---------------------------------------- 3.70s
2025-09-17 16:04:06.125459 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.30s
2025-09-17 16:04:06.125469 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.04s 2025-09-17 16:04:06.125480 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.69s 2025-09-17 16:04:06.125490 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.66s 2025-09-17 16:04:06.125501 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.61s 2025-09-17 16:04:06.125518 | orchestrator | common : Creating log volume -------------------------------------------- 2.19s 2025-09-17 16:04:06.125529 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.97s 2025-09-17 16:04:06.125540 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.64s 2025-09-17 16:04:06.125550 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.35s 2025-09-17 16:04:06.125561 | orchestrator | common : include_tasks -------------------------------------------------- 1.34s 2025-09-17 16:04:06.125598 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.22s 2025-09-17 16:04:06.125609 | orchestrator | 2025-09-17 16:04:06 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED 2025-09-17 16:04:06.125620 | orchestrator | 2025-09-17 16:04:06 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:04:09.168661 | orchestrator | 2025-09-17 16:04:09 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED 2025-09-17 16:04:09.168748 | orchestrator | 2025-09-17 16:04:09 | INFO  | Task bff7fe51-86ab-4657-944c-0bb7e6280ddd is in state STARTED 2025-09-17 16:04:09.168763 | orchestrator | 2025-09-17 16:04:09 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:04:09.168792 | orchestrator | 2025-09-17 16:04:09 | INFO  | Task 
6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED 2025-09-17 16:04:09.168803 | orchestrator | 2025-09-17 16:04:09 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED 2025-09-17 16:04:09.168815 | orchestrator | 2025-09-17 16:04:09 | INFO  | Task 03996d77-eb4d-4dd2-8700-d2293eb8de81 is in state STARTED 2025-09-17 16:04:09.168826 | orchestrator | 2025-09-17 16:04:09 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:04:12.199718 | orchestrator | 2025-09-17 16:04:12 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED 2025-09-17 16:04:12.199807 | orchestrator | 2025-09-17 16:04:12 | INFO  | Task bff7fe51-86ab-4657-944c-0bb7e6280ddd is in state STARTED 2025-09-17 16:04:12.199823 | orchestrator | 2025-09-17 16:04:12 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:04:12.199835 | orchestrator | 2025-09-17 16:04:12 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED 2025-09-17 16:04:12.199845 | orchestrator | 2025-09-17 16:04:12 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED 2025-09-17 16:04:12.199856 | orchestrator | 2025-09-17 16:04:12 | INFO  | Task 03996d77-eb4d-4dd2-8700-d2293eb8de81 is in state STARTED 2025-09-17 16:04:12.199867 | orchestrator | 2025-09-17 16:04:12 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:04:15.228773 | orchestrator | 2025-09-17 16:04:15 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED 2025-09-17 16:04:15.229364 | orchestrator | 2025-09-17 16:04:15 | INFO  | Task bff7fe51-86ab-4657-944c-0bb7e6280ddd is in state STARTED 2025-09-17 16:04:15.230095 | orchestrator | 2025-09-17 16:04:15 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:04:15.230895 | orchestrator | 2025-09-17 16:04:15 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED 2025-09-17 16:04:15.231544 | orchestrator | 2025-09-17 16:04:15 | INFO  | Task 
2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:04:15.232116 | orchestrator | 2025-09-17 16:04:15 | INFO  | Task 03996d77-eb4d-4dd2-8700-d2293eb8de81 is in state STARTED
2025-09-17 16:04:15.232352 | orchestrator | 2025-09-17 16:04:15 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:18.418667 | orchestrator | 2025-09-17 16:04:18 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:18.420342 | orchestrator | 2025-09-17 16:04:18 | INFO  | Task bff7fe51-86ab-4657-944c-0bb7e6280ddd is in state STARTED
2025-09-17 16:04:18.420745 | orchestrator | 2025-09-17 16:04:18 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:18.421874 | orchestrator | 2025-09-17 16:04:18 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:18.422738 | orchestrator | 2025-09-17 16:04:18 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:04:18.423676 | orchestrator | 2025-09-17 16:04:18 | INFO  | Task 03996d77-eb4d-4dd2-8700-d2293eb8de81 is in state STARTED
2025-09-17 16:04:18.423708 | orchestrator | 2025-09-17 16:04:18 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:21.486997 | orchestrator | 2025-09-17 16:04:21 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:21.490926 | orchestrator | 2025-09-17 16:04:21 | INFO  | Task bff7fe51-86ab-4657-944c-0bb7e6280ddd is in state STARTED
2025-09-17 16:04:21.495912 | orchestrator | 2025-09-17 16:04:21 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:21.500872 | orchestrator | 2025-09-17 16:04:21 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:21.504313 | orchestrator | 2025-09-17 16:04:21 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:04:21.505325 | orchestrator | 2025-09-17 16:04:21 | INFO  | Task 03996d77-eb4d-4dd2-8700-d2293eb8de81 is in state STARTED
2025-09-17 16:04:21.505357 | orchestrator | 2025-09-17 16:04:21 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:24.533941 | orchestrator | 2025-09-17 16:04:24 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:24.534070 | orchestrator | 2025-09-17 16:04:24 | INFO  | Task bff7fe51-86ab-4657-944c-0bb7e6280ddd is in state STARTED
2025-09-17 16:04:24.534485 | orchestrator | 2025-09-17 16:04:24 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:24.535619 | orchestrator | 2025-09-17 16:04:24 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:24.535657 | orchestrator | 2025-09-17 16:04:24 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:04:24.538575 | orchestrator | 2025-09-17 16:04:24 | INFO  | Task 03996d77-eb4d-4dd2-8700-d2293eb8de81 is in state STARTED
2025-09-17 16:04:24.538599 | orchestrator | 2025-09-17 16:04:24 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:27.571736 | orchestrator | 2025-09-17 16:04:27 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:27.571824 | orchestrator | 2025-09-17 16:04:27 | INFO  | Task bff7fe51-86ab-4657-944c-0bb7e6280ddd is in state STARTED
2025-09-17 16:04:27.571840 | orchestrator | 2025-09-17 16:04:27 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:27.571851 | orchestrator | 2025-09-17 16:04:27 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:27.571862 | orchestrator | 2025-09-17 16:04:27 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:04:27.572104 | orchestrator | 2025-09-17 16:04:27 | INFO  | Task 03996d77-eb4d-4dd2-8700-d2293eb8de81 is in state STARTED
2025-09-17 16:04:27.572129 | orchestrator | 2025-09-17 16:04:27 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:30.706131 | orchestrator | 2025-09-17 16:04:30 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:30.706241 | orchestrator | 2025-09-17 16:04:30 | INFO  | Task bff7fe51-86ab-4657-944c-0bb7e6280ddd is in state SUCCESS
2025-09-17 16:04:30.706439 | orchestrator | 2025-09-17 16:04:30 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:30.707199 | orchestrator | 2025-09-17 16:04:30 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:30.708282 | orchestrator | 2025-09-17 16:04:30 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:04:30.710517 | orchestrator | 2025-09-17 16:04:30 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:04:30.710754 | orchestrator | 2025-09-17 16:04:30 | INFO  | Task 03996d77-eb4d-4dd2-8700-d2293eb8de81 is in state STARTED
2025-09-17 16:04:30.710774 | orchestrator | 2025-09-17 16:04:30 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:33.733634 | orchestrator | 2025-09-17 16:04:33 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:33.734764 | orchestrator | 2025-09-17 16:04:33 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:33.736609 | orchestrator | 2025-09-17 16:04:33 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:33.737064 | orchestrator | 2025-09-17 16:04:33 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:04:33.737649 | orchestrator | 2025-09-17 16:04:33 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:04:33.738360 | orchestrator | 2025-09-17 16:04:33 | INFO  | Task 03996d77-eb4d-4dd2-8700-d2293eb8de81 is in state SUCCESS
2025-09-17 16:04:33.740090 | orchestrator |
2025-09-17 16:04:33.740117 | orchestrator |
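The repeated status lines above come from a client polling its submitted tasks until each one reaches a terminal state, sleeping one second between rounds. A minimal sketch of such a wait loop, assuming a hypothetical `get_state` callable rather than the actual OSISM client API:

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll task states until every task reaches a terminal state.

    get_state(task_id) is assumed to return a state string such as
    "STARTED", "SUCCESS", or "FAILURE" (hypothetical interface).
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
        pending -= set(results)
        if pending:
            # Matches the "Wait 1 second(s) until the next check" lines above.
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

As in the log, tasks drop out of the polling set individually once they report SUCCESS, and the loop ends when none remain.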
2025-09-17 16:04:33.740129 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 16:04:33.740140 | orchestrator |
2025-09-17 16:04:33.740173 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 16:04:33.740184 | orchestrator | Wednesday 17 September 2025 16:04:12 +0000 (0:00:00.285) 0:00:00.285 ***
2025-09-17 16:04:33.740195 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:33.740206 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:33.740217 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:33.740228 | orchestrator |
2025-09-17 16:04:33.740239 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 16:04:33.740249 | orchestrator | Wednesday 17 September 2025 16:04:12 +0000 (0:00:00.473) 0:00:00.758 ***
2025-09-17 16:04:33.740261 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-09-17 16:04:33.740272 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-09-17 16:04:33.740303 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-09-17 16:04:33.740314 | orchestrator |
2025-09-17 16:04:33.740325 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-09-17 16:04:33.740335 | orchestrator |
2025-09-17 16:04:33.740346 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-09-17 16:04:33.740357 | orchestrator | Wednesday 17 September 2025 16:04:13 +0000 (0:00:00.554) 0:00:01.313 ***
2025-09-17 16:04:33.740367 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:04:33.740378 | orchestrator |
2025-09-17 16:04:33.740389 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-09-17 16:04:33.740400 | orchestrator | Wednesday 17 September 2025 16:04:14 +0000 (0:00:00.714) 0:00:02.028 ***
2025-09-17 16:04:33.740411 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-17 16:04:33.740431 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-17 16:04:33.740442 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-17 16:04:33.740453 | orchestrator |
2025-09-17 16:04:33.740463 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-09-17 16:04:33.740474 | orchestrator | Wednesday 17 September 2025 16:04:15 +0000 (0:00:01.014) 0:00:03.042 ***
2025-09-17 16:04:33.740485 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-17 16:04:33.740495 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-17 16:04:33.740506 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-17 16:04:33.740517 | orchestrator |
2025-09-17 16:04:33.740527 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-09-17 16:04:33.740538 | orchestrator | Wednesday 17 September 2025 16:04:17 +0000 (0:00:02.257) 0:00:05.300 ***
2025-09-17 16:04:33.740549 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:33.740560 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:33.740571 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:33.740582 | orchestrator |
2025-09-17 16:04:33.740593 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-09-17 16:04:33.740603 | orchestrator | Wednesday 17 September 2025 16:04:19 +0000 (0:00:02.264) 0:00:07.565 ***
2025-09-17 16:04:33.740614 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:33.740625 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:33.740635 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:33.740646 | orchestrator |
2025-09-17 16:04:33.740657 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:04:33.740668 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:04:33.740680 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:04:33.740690 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:04:33.740703 | orchestrator |
2025-09-17 16:04:33.740716 | orchestrator |
2025-09-17 16:04:33.740728 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:04:33.740740 | orchestrator | Wednesday 17 September 2025 16:04:27 +0000 (0:00:07.751) 0:00:15.316 ***
2025-09-17 16:04:33.740752 | orchestrator | ===============================================================================
2025-09-17 16:04:33.740764 | orchestrator | memcached : Restart memcached container --------------------------------- 7.75s
2025-09-17 16:04:33.740777 | orchestrator | memcached : Check memcached container ----------------------------------- 2.26s
2025-09-17 16:04:33.740789 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.26s
2025-09-17 16:04:33.740801 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.01s
2025-09-17 16:04:33.740813 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.71s
2025-09-17 16:04:33.740832 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s
2025-09-17 16:04:33.740844 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s
2025-09-17 16:04:33.740857 | orchestrator |
2025-09-17 16:04:33.740868 | orchestrator |
2025-09-17 16:04:33.740880 | orchestrator | PLAY [Group hosts based on configuration] **************************************
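A wrapper that watches console output like the memcached run above typically decides pass/fail by parsing the PLAY RECAP lines. A hedged sketch of such a parser (the regex shape is inferred from the recap format in this log, not taken from Zuul or OSISM code):

```python
import re

# Matches recap lines such as:
#   testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")


def parse_recap(lines):
    """Return {host: {counter: value}} for every PLAY RECAP line found."""
    recap = {}
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            pairs = (pair.split("=") for pair in m.group("counters").split())
            recap[m.group("host")] = {k: int(v) for k, v in pairs}
    return recap


def deployment_failed(recap):
    """A run failed if any host reports failed or unreachable tasks."""
    return any(c.get("failed", 0) or c.get("unreachable", 0) for c in recap.values())
```

For the recap above this yields `ok=7, changed=4, failed=0` for each of the three nodes, so `deployment_failed` returns False.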
2025-09-17 16:04:33.740892 | orchestrator |
2025-09-17 16:04:33.740904 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 16:04:33.740916 | orchestrator | Wednesday 17 September 2025 16:04:12 +0000 (0:00:00.211) 0:00:00.211 ***
2025-09-17 16:04:33.740929 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:33.740941 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:33.740954 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:33.740967 | orchestrator |
2025-09-17 16:04:33.740979 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 16:04:33.741003 | orchestrator | Wednesday 17 September 2025 16:04:12 +0000 (0:00:00.395) 0:00:00.606 ***
2025-09-17 16:04:33.741016 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-09-17 16:04:33.741028 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-09-17 16:04:33.741039 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-09-17 16:04:33.741049 | orchestrator |
2025-09-17 16:04:33.741060 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-09-17 16:04:33.741070 | orchestrator |
2025-09-17 16:04:33.741081 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-09-17 16:04:33.741091 | orchestrator | Wednesday 17 September 2025 16:04:13 +0000 (0:00:00.822) 0:00:01.429 ***
2025-09-17 16:04:33.741102 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:04:33.741113 | orchestrator |
2025-09-17 16:04:33.741123 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-09-17 16:04:33.741134 | orchestrator | Wednesday 17 September 2025 16:04:14 +0000 (0:00:00.569) 0:00:01.998 ***
2025-09-17 16:04:33.741166 | orchestrator | changed: [testbed-node-0]
=> (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-17 
16:04:33.741264 | orchestrator | 2025-09-17 16:04:33.741275 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-17 16:04:33.741286 | orchestrator | Wednesday 17 September 2025 16:04:15 +0000 (0:00:01.586) 0:00:03.585 *** 2025-09-17 16:04:33.741297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741384 | orchestrator | 2025-09-17 16:04:33.741394 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-17 16:04:33.741405 | orchestrator | Wednesday 17 September 2025 16:04:19 +0000 (0:00:03.252) 0:00:06.837 *** 2025-09-17 16:04:33.741416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741489 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741500 | orchestrator | 2025-09-17 16:04:33.741511 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-17 16:04:33.741521 | orchestrator | Wednesday 17 September 2025 16:04:21 +0000 (0:00:02.785) 0:00:09.623 *** 2025-09-17 16:04:33.741532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-17 16:04:33.741616 | orchestrator | 2025-09-17 16:04:33.741627 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-17 16:04:33.741637 | orchestrator | Wednesday 17 September 2025 16:04:23 +0000 (0:00:01.704) 0:00:11.327 *** 2025-09-17 16:04:33.741648 | orchestrator | 2025-09-17 16:04:33.741659 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-17 16:04:33.741670 | orchestrator | Wednesday 17 September 2025 16:04:23 +0000 (0:00:00.067) 0:00:11.395 *** 2025-09-17 16:04:33.741680 | orchestrator | 2025-09-17 16:04:33.741691 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-17 16:04:33.741702 | orchestrator | Wednesday 17 September 2025 16:04:23 +0000 (0:00:00.052) 0:00:11.448 *** 2025-09-17 16:04:33.741712 | orchestrator | 2025-09-17 16:04:33.741723 | orchestrator | RUNNING HANDLER [redis : Restart redis container] 
******************************
2025-09-17 16:04:33.741734 | orchestrator | Wednesday 17 September 2025 16:04:23 +0000 (0:00:00.084) 0:00:11.533 ***
2025-09-17 16:04:33.741744 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:33.741755 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:33.741766 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:33.741776 | orchestrator |
2025-09-17 16:04:33.741787 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-09-17 16:04:33.741798 | orchestrator | Wednesday 17 September 2025 16:04:28 +0000 (0:00:04.372) 0:00:15.905 ***
2025-09-17 16:04:33.741808 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:33.741819 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:33.741833 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:33.741844 | orchestrator |
2025-09-17 16:04:33.741855 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:04:33.741866 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:04:33.741877 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:04:33.741888 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:04:33.741898 | orchestrator |
2025-09-17 16:04:33.741909 | orchestrator |
2025-09-17 16:04:33.741919 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:04:33.741930 | orchestrator | Wednesday 17 September 2025 16:04:32 +0000 (0:00:04.242) 0:00:20.147 ***
2025-09-17 16:04:33.741940 | orchestrator | ===============================================================================
2025-09-17 16:04:33.741951 | orchestrator | redis : Restart redis container ----------------------------------------- 4.37s
2025-09-17 16:04:33.741962 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.24s
2025-09-17 16:04:33.741972 | orchestrator | redis : Copying over default config.json files -------------------------- 3.25s
2025-09-17 16:04:33.741983 | orchestrator | redis : Copying over redis config files --------------------------------- 2.79s
2025-09-17 16:04:33.741994 | orchestrator | redis : Check redis containers ------------------------------------------ 1.70s
2025-09-17 16:04:33.742004 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.59s
2025-09-17 16:04:33.742065 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s
2025-09-17 16:04:33.742090 | orchestrator | redis : include_tasks --------------------------------------------------- 0.57s
2025-09-17 16:04:33.742109 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s
2025-09-17 16:04:33.742129 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s
2025-09-17 16:04:33.742184 | orchestrator | 2025-09-17 16:04:33 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:36.769326 | orchestrator | 2025-09-17 16:04:36 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:36.770763 | orchestrator | 2025-09-17 16:04:36 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:36.771279 | orchestrator | 2025-09-17 16:04:36 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:36.771872 | orchestrator | 2025-09-17 16:04:36 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:04:36.773209 | orchestrator | 2025-09-17 16:04:36 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:04:36.773242 | orchestrator | 2025-09-17 16:04:36 | INFO  | Wait 1 second(s)
until the next check
2025-09-17 16:04:40.021203 | orchestrator | 2025-09-17 16:04:40 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:40.022530 | orchestrator | 2025-09-17 16:04:40 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:40.023323 | orchestrator | 2025-09-17 16:04:40 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:40.032461 | orchestrator | 2025-09-17 16:04:40 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:04:40.044250 | orchestrator | 2025-09-17 16:04:40 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:04:40.044300 | orchestrator | 2025-09-17 16:04:40 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:43.344304 | orchestrator | 2025-09-17 16:04:43 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:43.344405 | orchestrator | 2025-09-17 16:04:43 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:43.344421 | orchestrator | 2025-09-17 16:04:43 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:43.344433 | orchestrator | 2025-09-17 16:04:43 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:04:43.344444 | orchestrator | 2025-09-17 16:04:43 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:04:43.344455 | orchestrator | 2025-09-17 16:04:43 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:46.375856 | orchestrator | 2025-09-17 16:04:46 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:46.375933 | orchestrator | 2025-09-17 16:04:46 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:46.375946 | orchestrator | 2025-09-17 16:04:46 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:46.375972 | orchestrator | 2025-09-17 16:04:46 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:04:46.377022 | orchestrator | 2025-09-17 16:04:46 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state STARTED
2025-09-17 16:04:46.377043 | orchestrator | 2025-09-17 16:04:46 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:49.596101 | orchestrator | 2025-09-17 16:04:49 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:49.596229 | orchestrator | 2025-09-17 16:04:49 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:49.596246 | orchestrator | 2025-09-17 16:04:49 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:49.596257 | orchestrator | 2025-09-17 16:04:49 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:04:49.596268 | orchestrator | 2025-09-17 16:04:49 | INFO  | Task 2b695f4b-2691-4f7d-a042-1817be521963 is in state SUCCESS
2025-09-17 16:04:49.596280 | orchestrator | 2025-09-17 16:04:49 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:49.597205 | orchestrator |
2025-09-17 16:04:49.597248 | orchestrator |
2025-09-17 16:04:49.597270 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-09-17 16:04:49.597424 | orchestrator |
2025-09-17 16:04:49.597451 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-09-17 16:04:49.597463 | orchestrator | Wednesday 17 September 2025 16:01:14 +0000 (0:00:00.177) 0:00:00.177 ***
2025-09-17 16:04:49.597474 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:04:49.597486 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:04:49.597497 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:04:49.597507 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.597518 | orchestrator |
ok: [testbed-node-1] 2025-09-17 16:04:49.597529 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:04:49.597539 | orchestrator | 2025-09-17 16:04:49.597550 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-17 16:04:49.597565 | orchestrator | Wednesday 17 September 2025 16:01:14 +0000 (0:00:00.626) 0:00:00.804 *** 2025-09-17 16:04:49.597577 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:04:49.597588 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:04:49.597599 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:04:49.597610 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.597620 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:04:49.597631 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:04:49.597664 | orchestrator | 2025-09-17 16:04:49.597675 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-17 16:04:49.597686 | orchestrator | Wednesday 17 September 2025 16:01:15 +0000 (0:00:00.613) 0:00:01.417 *** 2025-09-17 16:04:49.597696 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:04:49.597707 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:04:49.597717 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:04:49.597727 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.597738 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:04:49.597748 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:04:49.597759 | orchestrator | 2025-09-17 16:04:49.597769 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-17 16:04:49.597780 | orchestrator | Wednesday 17 September 2025 16:01:16 +0000 (0:00:00.664) 0:00:02.082 *** 2025-09-17 16:04:49.597791 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:04:49.597801 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:04:49.597811 | orchestrator | changed: 
[testbed-node-3] 2025-09-17 16:04:49.597822 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:04:49.597832 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:04:49.597842 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:04:49.597853 | orchestrator | 2025-09-17 16:04:49.597863 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-17 16:04:49.597874 | orchestrator | Wednesday 17 September 2025 16:01:17 +0000 (0:00:01.766) 0:00:03.849 *** 2025-09-17 16:04:49.597884 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:04:49.597895 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:04:49.597905 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:04:49.597916 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:04:49.597926 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:04:49.597937 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:04:49.597947 | orchestrator | 2025-09-17 16:04:49.597958 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-17 16:04:49.597969 | orchestrator | Wednesday 17 September 2025 16:01:18 +0000 (0:00:00.949) 0:00:04.798 *** 2025-09-17 16:04:49.597979 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:04:49.597990 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:04:49.598000 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:04:49.598011 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:04:49.598074 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:04:49.598087 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:04:49.598099 | orchestrator | 2025-09-17 16:04:49.598111 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-17 16:04:49.598172 | orchestrator | Wednesday 17 September 2025 16:01:19 +0000 (0:00:00.932) 0:00:05.730 *** 2025-09-17 16:04:49.598188 | orchestrator | skipping: [testbed-node-3] 
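The three forwarding tasks above each enable one kernel sysctl. A minimal sketch of an equivalent drop-in file, assuming the standard Linux sysctl keys (rendered under /tmp here so it runs without root; on a node it would live in /etc/sysctl.d/ and be applied with `sysctl --system`):

```shell
# Hypothetical sysctl drop-in matching the "Enable IPv4 forwarding",
# "Enable IPv6 forwarding" and "Enable IPv6 router advertisements" tasks.
cat > /tmp/99-k3s-forwarding.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
EOF
```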
2025-09-17 16:04:49.598200 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:04:49.598212 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:04:49.598224 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.598236 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.598248 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.598260 | orchestrator |
2025-09-17 16:04:49.598272 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-09-17 16:04:49.598284 | orchestrator | Wednesday 17 September 2025 16:01:20 +0000 (0:00:00.705) 0:00:06.436 ***
2025-09-17 16:04:49.598296 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:04:49.598308 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:04:49.598320 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:04:49.598332 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.598344 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.598356 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.598367 | orchestrator |
2025-09-17 16:04:49.598391 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-09-17 16:04:49.598404 | orchestrator | Wednesday 17 September 2025 16:01:21 +0000 (0:00:00.824) 0:00:07.261 ***
2025-09-17 16:04:49.598425 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-17 16:04:49.598436 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-17 16:04:49.598446 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:04:49.598457 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-17 16:04:49.598468 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-17 16:04:49.598478 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:04:49.598489 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-17 16:04:49.598499 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-17 16:04:49.598510 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:04:49.598520 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-17 16:04:49.598543 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-17 16:04:49.598554 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.598564 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-17 16:04:49.598575 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-17 16:04:49.598585 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.598596 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-17 16:04:49.598606 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-17 16:04:49.598617 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.598627 | orchestrator |
2025-09-17 16:04:49.598637 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-09-17 16:04:49.598648 | orchestrator | Wednesday 17 September 2025 16:01:22 +0000 (0:00:00.677) 0:00:07.938 ***
2025-09-17 16:04:49.598658 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:04:49.598669 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:04:49.598679 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:04:49.598690 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.598700 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.598710 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.598721 | orchestrator |
2025-09-17 16:04:49.598731 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-09-17 16:04:49.598742 | orchestrator | Wednesday 17 September 2025 16:01:23 +0000 (0:00:01.522) 0:00:09.461 ***
2025-09-17 16:04:49.598753 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:04:49.598763 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:04:49.598774 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:04:49.598784 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.598795 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.598805 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.598816 | orchestrator |
2025-09-17 16:04:49.598826 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-09-17 16:04:49.598837 | orchestrator | Wednesday 17 September 2025 16:01:24 +0000 (0:00:00.817) 0:00:10.279 ***
2025-09-17 16:04:49.598847 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:04:49.598858 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:49.598868 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:04:49.598878 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:49.598889 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:04:49.598899 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.598910 | orchestrator |
2025-09-17 16:04:49.598920 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-09-17 16:04:49.598931 | orchestrator | Wednesday 17 September 2025 16:01:29 +0000 (0:00:05.557) 0:00:15.837 ***
2025-09-17 16:04:49.598941 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:04:49.598952 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:04:49.598968 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:04:49.598978 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.598989 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.598999 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.599010 | orchestrator |
2025-09-17 16:04:49.599020 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-09-17 16:04:49.599031 | orchestrator | Wednesday 17 September 2025 16:01:31 +0000 (0:00:01.215) 0:00:17.053 ***
2025-09-17 16:04:49.599041 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:04:49.599052 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:04:49.599062 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:04:49.599073 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.599083 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.599096 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.599115 | orchestrator |
2025-09-17 16:04:49.599171 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-09-17 16:04:49.599193 | orchestrator | Wednesday 17 September 2025 16:01:33 +0000 (0:00:02.015) 0:00:19.068 ***
2025-09-17 16:04:49.599211 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:04:49.599231 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:04:49.599248 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:04:49.599267 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.599286 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.599305 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.599322 | orchestrator |
2025-09-17 16:04:49.599342 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-09-17 16:04:49.599360 | orchestrator | Wednesday 17 September 2025 16:01:34 +0000 (0:00:00.828) 0:00:19.897 ***
2025-09-17 16:04:49.599372 | orchestrator | changed: [testbed-node-3] => (item=rancher)
2025-09-17 16:04:49.599383 | orchestrator | changed: [testbed-node-4] => (item=rancher)
2025-09-17 16:04:49.599395 | orchestrator | changed: [testbed-node-5] => (item=rancher)
2025-09-17 16:04:49.599423 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s)
2025-09-17 16:04:49.599442 | orchestrator | changed: [testbed-node-0] => (item=rancher)
2025-09-17 16:04:49.599462 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s)
2025-09-17 16:04:49.599481 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s)
2025-09-17 16:04:49.599492 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s)
2025-09-17 16:04:49.599503 | orchestrator | changed: [testbed-node-1] => (item=rancher)
2025-09-17 16:04:49.599513 | orchestrator | changed: [testbed-node-2] => (item=rancher)
2025-09-17 16:04:49.599524 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s)
2025-09-17 16:04:49.599534 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s)
2025-09-17 16:04:49.599545 | orchestrator |
2025-09-17 16:04:49.599555 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-09-17 16:04:49.599566 | orchestrator | Wednesday 17 September 2025 16:01:36 +0000 (0:00:02.511) 0:00:22.408 ***
2025-09-17 16:04:49.599576 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:04:49.599587 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:04:49.599597 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:04:49.599608 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.599618 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:49.599629 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:49.599639 | orchestrator |
2025-09-17 16:04:49.599661 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-09-17 16:04:49.599672 | orchestrator |
2025-09-17 16:04:49.599683 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-09-17 16:04:49.599694 | orchestrator | Wednesday 17 September 2025 16:01:38 +0000 (0:00:01.857) 0:00:24.266 ***
2025-09-17 16:04:49.599704 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.599715 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.599735 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.599745 | orchestrator |
2025-09-17 16:04:49.599756 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-09-17 16:04:49.599767 | orchestrator | Wednesday 17 September 2025 16:01:39 +0000 (0:00:01.372) 0:00:25.638 ***
2025-09-17 16:04:49.599778 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.599788 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.599799 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.599813 | orchestrator |
2025-09-17 16:04:49.599832 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-09-17 16:04:49.599851 | orchestrator | Wednesday 17 September 2025 16:01:41 +0000 (0:00:01.240) 0:00:26.878 ***
2025-09-17 16:04:49.599870 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.599889 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.599908 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.599928 | orchestrator |
2025-09-17 16:04:49.599947 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-09-17 16:04:49.599961 | orchestrator | Wednesday 17 September 2025 16:01:42 +0000 (0:00:01.132) 0:00:28.011 ***
2025-09-17 16:04:49.599972 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.599983 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.599993 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.600003 | orchestrator |
2025-09-17 16:04:49.600014 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-09-17 16:04:49.600024 | orchestrator | Wednesday 17 September 2025 16:01:42 +0000 (0:00:00.668) 0:00:28.680 ***
2025-09-17 16:04:49.600035 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.600045 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.600056 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.600066 | orchestrator |
2025-09-17 16:04:49.600077 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2025-09-17 16:04:49.600088 | orchestrator | Wednesday 17 September 2025 16:01:43 +0000 (0:00:00.397) 0:00:29.078 ***
2025-09-17 16:04:49.600098 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.600108 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.600119 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.600191 | orchestrator |
2025-09-17 16:04:49.600203 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2025-09-17 16:04:49.600213 | orchestrator | Wednesday 17 September 2025 16:01:44 +0000 (0:00:00.927) 0:00:30.005 ***
2025-09-17 16:04:49.600224 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.600235 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:49.600245 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:49.600256 | orchestrator |
2025-09-17 16:04:49.600267 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-09-17 16:04:49.600277 | orchestrator | Wednesday 17 September 2025 16:01:45 +0000 (0:00:01.832) 0:00:31.838 ***
2025-09-17 16:04:49.600288 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:04:49.600298 | orchestrator |
2025-09-17 16:04:49.600309 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-09-17 16:04:49.600320 | orchestrator | Wednesday 17 September 2025 16:01:46 +0000 (0:00:00.803) 0:00:32.641 ***
2025-09-17 16:04:49.600330 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.600341 | orchestrator | ok: [testbed-node-2]
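The k3s_custom_registries tasks earlier in this play create /etc/rancher/k3s and insert a registries.yaml. A hedged sketch of what such a file typically contains, written to /tmp (the docker.io mirror endpoint is an assumed placeholder, not a value taken from this job):

```shell
# Hypothetical registries.yaml of the kind managed by k3s_custom_registries.
mkdir -p /tmp/etc/rancher/k3s
cat > /tmp/etc/rancher/k3s/registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com"
EOF
```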
2025-09-17 16:04:49.600351 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.600362 | orchestrator |
2025-09-17 16:04:49.600373 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-09-17 16:04:49.600383 | orchestrator | Wednesday 17 September 2025 16:01:50 +0000 (0:00:03.338) 0:00:35.980 ***
2025-09-17 16:04:49.600394 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.600404 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.600415 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.600426 | orchestrator |
2025-09-17 16:04:49.600436 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-09-17 16:04:49.600456 | orchestrator | Wednesday 17 September 2025 16:01:51 +0000 (0:00:01.023) 0:00:37.003 ***
2025-09-17 16:04:49.600467 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.600477 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.600487 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.600498 | orchestrator |
2025-09-17 16:04:49.600508 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-09-17 16:04:49.600519 | orchestrator | Wednesday 17 September 2025 16:01:52 +0000 (0:00:00.944) 0:00:37.947 ***
2025-09-17 16:04:49.600535 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.600546 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.600556 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.600567 | orchestrator |
2025-09-17 16:04:49.600576 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-09-17 16:04:49.600586 | orchestrator | Wednesday 17 September 2025 16:01:53 +0000 (0:00:01.915) 0:00:39.863 ***
2025-09-17 16:04:49.600595 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.600605 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.600614 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.600623 | orchestrator |
2025-09-17 16:04:49.600632 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-09-17 16:04:49.600642 | orchestrator | Wednesday 17 September 2025 16:01:54 +0000 (0:00:00.459) 0:00:40.323 ***
2025-09-17 16:04:49.600651 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.600661 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.600670 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.600679 | orchestrator |
2025-09-17 16:04:49.600688 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-09-17 16:04:49.600698 | orchestrator | Wednesday 17 September 2025 16:01:55 +0000 (0:00:00.584) 0:00:40.907 ***
2025-09-17 16:04:49.600707 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.600716 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:49.600726 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:49.600735 | orchestrator |
2025-09-17 16:04:49.600753 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-09-17 16:04:49.600763 | orchestrator | Wednesday 17 September 2025 16:01:57 +0000 (0:00:02.013) 0:00:42.921 ***
2025-09-17 16:04:49.600772 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-17 16:04:49.600782 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-17 16:04:49.600792 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-17 16:04:49.600802 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-17 16:04:49.600811 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-17 16:04:49.600820 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-17 16:04:49.600830 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-17 16:04:49.600839 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-17 16:04:49.600849 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-17 16:04:49.600858 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-17 16:04:49.600873 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-17 16:04:49.600883 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-17 16:04:49.600892 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-17 16:04:49.600902 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
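The FAILED - RETRYING messages above come from a retry-until pattern: poll until every node has registered with the cluster, giving up after 20 attempts. A minimal sketch of that logic with kubectl stubbed out so it runs anywhere (the real task queries the freshly initialized k3s API server):

```shell
# Stub standing in for the real kubectl; it ignores its arguments and
# always reports three Ready nodes, so the loop exits on the first check.
kubectl() { printf 'testbed-node-0 Ready\ntestbed-node-1 Ready\ntestbed-node-2 Ready\n'; }

retries=20
until [ "$(kubectl get nodes --no-headers | wc -l)" -ge 3 ]; do
  retries=$((retries - 1))
  [ "$retries" -gt 0 ] || { echo 'nodes failed to join' >&2; exit 1; }
  sleep 1   # the play waits between attempts
done
joined=yes
```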
2025-09-17 16:04:49.600911 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.600920 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.600930 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.600939 | orchestrator |
2025-09-17 16:04:49.600949 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-09-17 16:04:49.600958 | orchestrator | Wednesday 17 September 2025 16:02:52 +0000 (0:00:55.114) 0:01:38.036 ***
2025-09-17 16:04:49.600968 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.600977 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.600986 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.600996 | orchestrator |
2025-09-17 16:04:49.601005 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-09-17 16:04:49.601014 | orchestrator | Wednesday 17 September 2025 16:02:52 +0000 (0:00:00.269) 0:01:38.305 ***
2025-09-17 16:04:49.601024 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:49.601033 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.601042 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:49.601052 | orchestrator |
2025-09-17 16:04:49.601061 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-09-17 16:04:49.601070 | orchestrator | Wednesday 17 September 2025 16:02:53 +0000 (0:00:01.102) 0:01:39.407 ***
2025-09-17 16:04:49.601080 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.601089 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:49.601098 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:49.601108 | orchestrator |
2025-09-17 16:04:49.601121 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-09-17 16:04:49.601150 | orchestrator | Wednesday 17 September 2025 16:02:54 +0000 (0:00:01.392) 0:01:40.799 ***
2025-09-17 16:04:49.601160 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.601169 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:49.601179 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:49.601188 | orchestrator |
2025-09-17 16:04:49.601197 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-09-17 16:04:49.601207 | orchestrator | Wednesday 17 September 2025 16:03:20 +0000 (0:00:25.988) 0:02:06.788 ***
2025-09-17 16:04:49.601216 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.601226 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.601235 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.601244 | orchestrator |
2025-09-17 16:04:49.601254 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-09-17 16:04:49.601263 | orchestrator | Wednesday 17 September 2025 16:03:21 +0000 (0:00:00.813) 0:02:07.461 ***
2025-09-17 16:04:49.601272 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.601282 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.601291 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.601300 | orchestrator |
2025-09-17 16:04:49.601310 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-09-17 16:04:49.601319 | orchestrator | Wednesday 17 September 2025 16:03:22 +0000 (0:00:00.651) 0:02:08.275 ***
2025-09-17 16:04:49.601334 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.601344 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:49.601353 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:49.601363 | orchestrator |
2025-09-17 16:04:49.601372 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-09-17 16:04:49.601388 | orchestrator | Wednesday 17 September 2025 16:03:23 +0000 (0:00:00.635) 0:02:08.927 ***
2025-09-17 16:04:49.601397 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.601407 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.601416 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.601425 | orchestrator |
2025-09-17 16:04:49.601435 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-09-17 16:04:49.601444 | orchestrator | Wednesday 17 September 2025 16:03:23 +0000 (0:00:00.406) 0:02:09.562 ***
2025-09-17 16:04:49.601454 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.601463 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.601472 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.601482 | orchestrator |
2025-09-17 16:04:49.601491 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-09-17 16:04:49.601501 | orchestrator | Wednesday 17 September 2025 16:03:24 +0000 (0:00:00.825) 0:02:09.969 ***
2025-09-17 16:04:49.601510 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.601519 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:49.601529 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:49.601538 | orchestrator |
2025-09-17 16:04:49.601547 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-09-17 16:04:49.601557 | orchestrator | Wednesday 17 September 2025 16:03:24 +0000 (0:00:00.656) 0:02:10.794 ***
2025-09-17 16:04:49.601566 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.601576 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:49.601585 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:49.601594 | orchestrator |
2025-09-17 16:04:49.601604 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-09-17 16:04:49.601613 | orchestrator | Wednesday 17 September 2025 16:03:25 +0000 (0:00:00.845) 0:02:11.451 ***
2025-09-17 16:04:49.601623 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.601632 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:49.601642 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:49.601651 | orchestrator |
2025-09-17 16:04:49.601660 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-09-17 16:04:49.601670 | orchestrator | Wednesday 17 September 2025 16:03:26 +0000 (0:00:00.754) 0:02:12.296 ***
2025-09-17 16:04:49.601683 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:04:49.601700 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:04:49.601717 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:04:49.601734 | orchestrator |
2025-09-17 16:04:49.601751 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-09-17 16:04:49.601761 | orchestrator | Wednesday 17 September 2025 16:03:27 +0000 (0:00:00.754) 0:02:13.051 ***
2025-09-17 16:04:49.601771 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.601780 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.601790 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.601799 | orchestrator |
2025-09-17 16:04:49.601808 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-09-17 16:04:49.601818 | orchestrator | Wednesday 17 September 2025 16:03:27 +0000 (0:00:00.505) 0:02:13.556 ***
2025-09-17 16:04:49.601828 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.601837 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.601846 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.601856 | orchestrator |
2025-09-17 16:04:49.601865 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-09-17 16:04:49.601875 | orchestrator | Wednesday 17 September 2025 16:03:27 +0000 (0:00:00.305) 0:02:13.861 ***
2025-09-17 16:04:49.601884 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.601894 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.601903 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.601912 | orchestrator |
2025-09-17 16:04:49.601922 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-09-17 16:04:49.601931 | orchestrator | Wednesday 17 September 2025 16:03:28 +0000 (0:00:00.684) 0:02:14.546 ***
2025-09-17 16:04:49.601946 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.601956 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.601966 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.601975 | orchestrator |
2025-09-17 16:04:49.601984 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-09-17 16:04:49.601994 | orchestrator | Wednesday 17 September 2025 16:03:29 +0000 (0:00:00.605) 0:02:15.151 ***
2025-09-17 16:04:49.602004 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-17 16:04:49.602064 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-17 16:04:49.602078 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-17 16:04:49.602088 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-17 16:04:49.602097 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-17 16:04:49.602106 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-17 16:04:49.602116 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-17 16:04:49.602170 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-17
16:04:49.602183 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-17 16:04:49.602192 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-17 16:04:49.602202 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-17 16:04:49.602218 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-17 16:04:49.602228 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-17 16:04:49.602237 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-17 16:04:49.602246 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-17 16:04:49.602256 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-17 16:04:49.602265 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-17 16:04:49.602274 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-17 16:04:49.602284 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-17 16:04:49.602293 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-17 16:04:49.602303 | orchestrator | 2025-09-17 16:04:49.602312 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-17 16:04:49.602321 | orchestrator | 2025-09-17 16:04:49.602331 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-17 16:04:49.602340 | orchestrator | Wednesday 17 September 2025 16:03:32 +0000 (0:00:03.124) 
0:02:18.276 *** 2025-09-17 16:04:49.602349 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:04:49.602359 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:04:49.602368 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:04:49.602378 | orchestrator | 2025-09-17 16:04:49.602387 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-17 16:04:49.602397 | orchestrator | Wednesday 17 September 2025 16:03:32 +0000 (0:00:00.367) 0:02:18.643 *** 2025-09-17 16:04:49.602406 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:04:49.602416 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:04:49.602425 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:04:49.602440 | orchestrator | 2025-09-17 16:04:49.602448 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-17 16:04:49.602455 | orchestrator | Wednesday 17 September 2025 16:03:33 +0000 (0:00:00.604) 0:02:19.247 *** 2025-09-17 16:04:49.602463 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:04:49.602471 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:04:49.602478 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:04:49.602486 | orchestrator | 2025-09-17 16:04:49.602493 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-17 16:04:49.602501 | orchestrator | Wednesday 17 September 2025 16:03:33 +0000 (0:00:00.548) 0:02:19.795 *** 2025-09-17 16:04:49.602509 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:04:49.602517 | orchestrator | 2025-09-17 16:04:49.602524 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-17 16:04:49.602532 | orchestrator | Wednesday 17 September 2025 16:03:34 +0000 (0:00:00.536) 0:02:20.332 *** 2025-09-17 16:04:49.602540 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:04:49.602547 | 
orchestrator | skipping: [testbed-node-4] 2025-09-17 16:04:49.602555 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:04:49.602563 | orchestrator | 2025-09-17 16:04:49.602570 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-09-17 16:04:49.602578 | orchestrator | Wednesday 17 September 2025 16:03:34 +0000 (0:00:00.323) 0:02:20.655 *** 2025-09-17 16:04:49.602586 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:04:49.602593 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:04:49.602601 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:04:49.602608 | orchestrator | 2025-09-17 16:04:49.602616 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-17 16:04:49.602624 | orchestrator | Wednesday 17 September 2025 16:03:35 +0000 (0:00:00.506) 0:02:21.161 *** 2025-09-17 16:04:49.602631 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:04:49.602639 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:04:49.602647 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:04:49.602654 | orchestrator | 2025-09-17 16:04:49.602662 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-17 16:04:49.602670 | orchestrator | Wednesday 17 September 2025 16:03:35 +0000 (0:00:00.309) 0:02:21.471 *** 2025-09-17 16:04:49.602678 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:04:49.602685 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:04:49.602693 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:04:49.602701 | orchestrator | 2025-09-17 16:04:49.602713 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-17 16:04:49.602720 | orchestrator | Wednesday 17 September 2025 16:03:36 +0000 (0:00:00.623) 0:02:22.094 *** 2025-09-17 16:04:49.602728 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:04:49.602736 | orchestrator | 
changed: [testbed-node-4] 2025-09-17 16:04:49.602743 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:04:49.602751 | orchestrator | 2025-09-17 16:04:49.602759 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-17 16:04:49.602766 | orchestrator | Wednesday 17 September 2025 16:03:37 +0000 (0:00:01.059) 0:02:23.154 *** 2025-09-17 16:04:49.602774 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:04:49.602782 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:04:49.602789 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:04:49.602797 | orchestrator | 2025-09-17 16:04:49.602804 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-17 16:04:49.602812 | orchestrator | Wednesday 17 September 2025 16:03:38 +0000 (0:00:01.388) 0:02:24.542 *** 2025-09-17 16:04:49.602820 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:04:49.602827 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:04:49.602835 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:04:49.602842 | orchestrator | 2025-09-17 16:04:49.602850 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-17 16:04:49.602862 | orchestrator | 2025-09-17 16:04:49.602874 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-17 16:04:49.602882 | orchestrator | Wednesday 17 September 2025 16:03:50 +0000 (0:00:11.787) 0:02:36.329 *** 2025-09-17 16:04:49.602890 | orchestrator | ok: [testbed-manager] 2025-09-17 16:04:49.602898 | orchestrator | 2025-09-17 16:04:49.602905 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-17 16:04:49.602913 | orchestrator | Wednesday 17 September 2025 16:03:51 +0000 (0:00:00.871) 0:02:37.201 *** 2025-09-17 16:04:49.602921 | orchestrator | changed: [testbed-manager] 2025-09-17 16:04:49.602928 | 
orchestrator | 2025-09-17 16:04:49.602936 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-17 16:04:49.602944 | orchestrator | Wednesday 17 September 2025 16:03:51 +0000 (0:00:00.439) 0:02:37.641 *** 2025-09-17 16:04:49.602951 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-17 16:04:49.602959 | orchestrator | 2025-09-17 16:04:49.602967 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-17 16:04:49.602974 | orchestrator | Wednesday 17 September 2025 16:03:52 +0000 (0:00:00.563) 0:02:38.204 *** 2025-09-17 16:04:49.602982 | orchestrator | changed: [testbed-manager] 2025-09-17 16:04:49.602989 | orchestrator | 2025-09-17 16:04:49.602997 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-17 16:04:49.603005 | orchestrator | Wednesday 17 September 2025 16:03:53 +0000 (0:00:00.869) 0:02:39.074 *** 2025-09-17 16:04:49.603013 | orchestrator | changed: [testbed-manager] 2025-09-17 16:04:49.603020 | orchestrator | 2025-09-17 16:04:49.603028 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-17 16:04:49.603036 | orchestrator | Wednesday 17 September 2025 16:03:54 +0000 (0:00:00.970) 0:02:40.044 *** 2025-09-17 16:04:49.603043 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-17 16:04:49.603051 | orchestrator | 2025-09-17 16:04:49.603059 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-17 16:04:49.603066 | orchestrator | Wednesday 17 September 2025 16:03:55 +0000 (0:00:01.436) 0:02:41.480 *** 2025-09-17 16:04:49.603074 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-17 16:04:49.603082 | orchestrator | 2025-09-17 16:04:49.603089 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-17 
16:04:49.603097 | orchestrator | Wednesday 17 September 2025 16:03:56 +0000 (0:00:00.771) 0:02:42.252 *** 2025-09-17 16:04:49.603105 | orchestrator | changed: [testbed-manager] 2025-09-17 16:04:49.603112 | orchestrator | 2025-09-17 16:04:49.603120 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-17 16:04:49.603146 | orchestrator | Wednesday 17 September 2025 16:03:56 +0000 (0:00:00.388) 0:02:42.640 *** 2025-09-17 16:04:49.603154 | orchestrator | changed: [testbed-manager] 2025-09-17 16:04:49.603161 | orchestrator | 2025-09-17 16:04:49.603169 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-17 16:04:49.603177 | orchestrator | 2025-09-17 16:04:49.603184 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-17 16:04:49.603192 | orchestrator | Wednesday 17 September 2025 16:03:57 +0000 (0:00:00.493) 0:02:43.134 *** 2025-09-17 16:04:49.603200 | orchestrator | ok: [testbed-manager] 2025-09-17 16:04:49.603208 | orchestrator | 2025-09-17 16:04:49.603215 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-17 16:04:49.603223 | orchestrator | Wednesday 17 September 2025 16:03:57 +0000 (0:00:00.152) 0:02:43.287 *** 2025-09-17 16:04:49.603231 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-17 16:04:49.603239 | orchestrator | 2025-09-17 16:04:49.603246 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-17 16:04:49.603254 | orchestrator | Wednesday 17 September 2025 16:03:57 +0000 (0:00:00.213) 0:02:43.500 *** 2025-09-17 16:04:49.603261 | orchestrator | ok: [testbed-manager] 2025-09-17 16:04:49.603269 | orchestrator | 2025-09-17 16:04:49.603281 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 
2025-09-17 16:04:49.603289 | orchestrator | Wednesday 17 September 2025 16:03:58 +0000 (0:00:00.847) 0:02:44.348 *** 2025-09-17 16:04:49.603296 | orchestrator | ok: [testbed-manager] 2025-09-17 16:04:49.603304 | orchestrator | 2025-09-17 16:04:49.603312 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-17 16:04:49.603319 | orchestrator | Wednesday 17 September 2025 16:04:00 +0000 (0:00:01.616) 0:02:45.965 *** 2025-09-17 16:04:49.603327 | orchestrator | changed: [testbed-manager] 2025-09-17 16:04:49.603335 | orchestrator | 2025-09-17 16:04:49.603343 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-17 16:04:49.603350 | orchestrator | Wednesday 17 September 2025 16:04:00 +0000 (0:00:00.760) 0:02:46.726 *** 2025-09-17 16:04:49.603358 | orchestrator | ok: [testbed-manager] 2025-09-17 16:04:49.603365 | orchestrator | 2025-09-17 16:04:49.603376 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-17 16:04:49.603384 | orchestrator | Wednesday 17 September 2025 16:04:01 +0000 (0:00:00.433) 0:02:47.159 *** 2025-09-17 16:04:49.603392 | orchestrator | changed: [testbed-manager] 2025-09-17 16:04:49.603399 | orchestrator | 2025-09-17 16:04:49.603407 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-17 16:04:49.603415 | orchestrator | Wednesday 17 September 2025 16:04:08 +0000 (0:00:07.243) 0:02:54.403 *** 2025-09-17 16:04:49.603422 | orchestrator | changed: [testbed-manager] 2025-09-17 16:04:49.603430 | orchestrator | 2025-09-17 16:04:49.603438 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-17 16:04:49.603446 | orchestrator | Wednesday 17 September 2025 16:04:19 +0000 (0:00:11.188) 0:03:05.592 *** 2025-09-17 16:04:49.603453 | orchestrator | ok: [testbed-manager] 2025-09-17 16:04:49.603461 | orchestrator 
| 2025-09-17 16:04:49.603469 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-17 16:04:49.603477 | orchestrator | 2025-09-17 16:04:49.603484 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-17 16:04:49.603492 | orchestrator | Wednesday 17 September 2025 16:04:20 +0000 (0:00:00.589) 0:03:06.181 *** 2025-09-17 16:04:49.603500 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:04:49.603507 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:04:49.603515 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:04:49.603523 | orchestrator | 2025-09-17 16:04:49.603535 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-17 16:04:49.603543 | orchestrator | Wednesday 17 September 2025 16:04:20 +0000 (0:00:00.530) 0:03:06.712 *** 2025-09-17 16:04:49.603550 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603558 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:04:49.603566 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:04:49.603573 | orchestrator | 2025-09-17 16:04:49.603581 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-17 16:04:49.603589 | orchestrator | Wednesday 17 September 2025 16:04:21 +0000 (0:00:00.352) 0:03:07.064 *** 2025-09-17 16:04:49.603596 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:04:49.603604 | orchestrator | 2025-09-17 16:04:49.603612 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-17 16:04:49.603619 | orchestrator | Wednesday 17 September 2025 16:04:21 +0000 (0:00:00.600) 0:03:07.665 *** 2025-09-17 16:04:49.603627 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603635 | orchestrator | 2025-09-17 16:04:49.603642 | 
orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-17 16:04:49.603650 | orchestrator | Wednesday 17 September 2025 16:04:21 +0000 (0:00:00.178) 0:03:07.843 *** 2025-09-17 16:04:49.603658 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603665 | orchestrator | 2025-09-17 16:04:49.603673 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-17 16:04:49.603681 | orchestrator | Wednesday 17 September 2025 16:04:22 +0000 (0:00:00.219) 0:03:08.063 *** 2025-09-17 16:04:49.603694 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603701 | orchestrator | 2025-09-17 16:04:49.603709 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-17 16:04:49.603717 | orchestrator | Wednesday 17 September 2025 16:04:22 +0000 (0:00:00.501) 0:03:08.564 *** 2025-09-17 16:04:49.603725 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603732 | orchestrator | 2025-09-17 16:04:49.603740 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-17 16:04:49.603748 | orchestrator | Wednesday 17 September 2025 16:04:22 +0000 (0:00:00.185) 0:03:08.749 *** 2025-09-17 16:04:49.603755 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603763 | orchestrator | 2025-09-17 16:04:49.603771 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-17 16:04:49.603778 | orchestrator | Wednesday 17 September 2025 16:04:23 +0000 (0:00:00.172) 0:03:08.922 *** 2025-09-17 16:04:49.603786 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603794 | orchestrator | 2025-09-17 16:04:49.603801 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-17 16:04:49.603809 | orchestrator | Wednesday 17 September 2025 16:04:23 +0000 (0:00:00.197) 0:03:09.119 *** 
2025-09-17 16:04:49.603817 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603824 | orchestrator | 2025-09-17 16:04:49.603832 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-17 16:04:49.603840 | orchestrator | Wednesday 17 September 2025 16:04:23 +0000 (0:00:00.191) 0:03:09.310 *** 2025-09-17 16:04:49.603847 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603855 | orchestrator | 2025-09-17 16:04:49.603863 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-17 16:04:49.603870 | orchestrator | Wednesday 17 September 2025 16:04:23 +0000 (0:00:00.191) 0:03:09.502 *** 2025-09-17 16:04:49.603878 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603886 | orchestrator | 2025-09-17 16:04:49.603893 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-17 16:04:49.603901 | orchestrator | Wednesday 17 September 2025 16:04:23 +0000 (0:00:00.183) 0:03:09.686 *** 2025-09-17 16:04:49.603909 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-17 16:04:49.603917 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-17 16:04:49.603924 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603932 | orchestrator | 2025-09-17 16:04:49.603940 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-09-17 16:04:49.603947 | orchestrator | Wednesday 17 September 2025 16:04:24 +0000 (0:00:00.270) 0:03:09.957 *** 2025-09-17 16:04:49.603955 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603962 | orchestrator | 2025-09-17 16:04:49.603970 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-17 16:04:49.603978 | orchestrator | Wednesday 17 September 2025 16:04:24 +0000 (0:00:00.199) 0:03:10.156 *** 2025-09-17 
16:04:49.603985 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.603993 | orchestrator | 2025-09-17 16:04:49.604003 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-17 16:04:49.604020 | orchestrator | Wednesday 17 September 2025 16:04:24 +0000 (0:00:00.202) 0:03:10.359 *** 2025-09-17 16:04:49.604034 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604046 | orchestrator | 2025-09-17 16:04:49.604058 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-17 16:04:49.604072 | orchestrator | Wednesday 17 September 2025 16:04:24 +0000 (0:00:00.196) 0:03:10.555 *** 2025-09-17 16:04:49.604085 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604098 | orchestrator | 2025-09-17 16:04:49.604107 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-17 16:04:49.604114 | orchestrator | Wednesday 17 September 2025 16:04:24 +0000 (0:00:00.181) 0:03:10.736 *** 2025-09-17 16:04:49.604122 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604152 | orchestrator | 2025-09-17 16:04:49.604160 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-17 16:04:49.604167 | orchestrator | Wednesday 17 September 2025 16:04:25 +0000 (0:00:00.556) 0:03:11.293 *** 2025-09-17 16:04:49.604175 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604183 | orchestrator | 2025-09-17 16:04:49.604190 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-17 16:04:49.604198 | orchestrator | Wednesday 17 September 2025 16:04:25 +0000 (0:00:00.166) 0:03:11.459 *** 2025-09-17 16:04:49.604206 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604214 | orchestrator | 2025-09-17 16:04:49.604227 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] 
************************ 2025-09-17 16:04:49.604235 | orchestrator | Wednesday 17 September 2025 16:04:25 +0000 (0:00:00.182) 0:03:11.641 *** 2025-09-17 16:04:49.604243 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604250 | orchestrator | 2025-09-17 16:04:49.604258 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-17 16:04:49.604266 | orchestrator | Wednesday 17 September 2025 16:04:25 +0000 (0:00:00.165) 0:03:11.806 *** 2025-09-17 16:04:49.604274 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604281 | orchestrator | 2025-09-17 16:04:49.604289 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-17 16:04:49.604297 | orchestrator | Wednesday 17 September 2025 16:04:26 +0000 (0:00:00.158) 0:03:11.965 *** 2025-09-17 16:04:49.604304 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604312 | orchestrator | 2025-09-17 16:04:49.604320 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-17 16:04:49.604327 | orchestrator | Wednesday 17 September 2025 16:04:26 +0000 (0:00:00.164) 0:03:12.129 *** 2025-09-17 16:04:49.604335 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604343 | orchestrator | 2025-09-17 16:04:49.604350 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-17 16:04:49.604358 | orchestrator | Wednesday 17 September 2025 16:04:26 +0000 (0:00:00.141) 0:03:12.270 *** 2025-09-17 16:04:49.604366 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-17 16:04:49.604374 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-17 16:04:49.604382 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-17 16:04:49.604389 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-17 
16:04:49.604397 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604405 | orchestrator | 2025-09-17 16:04:49.604412 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-17 16:04:49.604420 | orchestrator | Wednesday 17 September 2025 16:04:26 +0000 (0:00:00.384) 0:03:12.655 *** 2025-09-17 16:04:49.604428 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604435 | orchestrator | 2025-09-17 16:04:49.604443 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-17 16:04:49.604451 | orchestrator | Wednesday 17 September 2025 16:04:27 +0000 (0:00:00.249) 0:03:12.904 *** 2025-09-17 16:04:49.604458 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604466 | orchestrator | 2025-09-17 16:04:49.604474 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-17 16:04:49.604482 | orchestrator | Wednesday 17 September 2025 16:04:27 +0000 (0:00:00.184) 0:03:13.089 *** 2025-09-17 16:04:49.604489 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604497 | orchestrator | 2025-09-17 16:04:49.604505 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-17 16:04:49.604512 | orchestrator | Wednesday 17 September 2025 16:04:27 +0000 (0:00:00.199) 0:03:13.289 *** 2025-09-17 16:04:49.604520 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604528 | orchestrator | 2025-09-17 16:04:49.604535 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-09-17 16:04:49.604543 | orchestrator | Wednesday 17 September 2025 16:04:27 +0000 (0:00:00.189) 0:03:13.478 *** 2025-09-17 16:04:49.604558 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-17 16:04:49.604566 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get 
CiliumLoadBalancerIPPool.cilium.io)  2025-09-17 16:04:49.604574 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604581 | orchestrator | 2025-09-17 16:04:49.604589 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-17 16:04:49.604597 | orchestrator | Wednesday 17 September 2025 16:04:28 +0000 (0:00:00.421) 0:03:13.900 *** 2025-09-17 16:04:49.604605 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:04:49.604612 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:04:49.604620 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:04:49.604628 | orchestrator | 2025-09-17 16:04:49.604635 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-17 16:04:49.604643 | orchestrator | Wednesday 17 September 2025 16:04:28 +0000 (0:00:00.479) 0:03:14.380 *** 2025-09-17 16:04:49.604651 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:04:49.604658 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:04:49.604666 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:04:49.604674 | orchestrator | 2025-09-17 16:04:49.604682 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-17 16:04:49.604689 | orchestrator | 2025-09-17 16:04:49.604699 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-17 16:04:49.604717 | orchestrator | Wednesday 17 September 2025 16:04:29 +0000 (0:00:00.999) 0:03:15.379 *** 2025-09-17 16:04:49.604730 | orchestrator | ok: [testbed-manager] 2025-09-17 16:04:49.604738 | orchestrator | 2025-09-17 16:04:49.604746 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-17 16:04:49.604756 | orchestrator | Wednesday 17 September 2025 16:04:29 +0000 (0:00:00.100) 0:03:15.480 *** 2025-09-17 16:04:49.604769 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml 
for testbed-manager
2025-09-17 16:04:49.604780 | orchestrator |
2025-09-17 16:04:49.604788 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-09-17 16:04:49.604796 | orchestrator | Wednesday 17 September 2025 16:04:29 +0000 (0:00:00.361) 0:03:15.842 ***
2025-09-17 16:04:49.604803 | orchestrator | changed: [testbed-manager]
2025-09-17 16:04:49.604811 | orchestrator |
2025-09-17 16:04:49.604819 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-09-17 16:04:49.604826 | orchestrator |
2025-09-17 16:04:49.604834 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-09-17 16:04:49.604841 | orchestrator | Wednesday 17 September 2025 16:04:35 +0000 (0:00:05.441) 0:03:21.283 ***
2025-09-17 16:04:49.604849 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:04:49.604857 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:04:49.604865 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:04:49.604877 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:04:49.604885 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:04:49.604893 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:04:49.604900 | orchestrator |
2025-09-17 16:04:49.604908 | orchestrator | TASK [Manage labels] ***********************************************************
2025-09-17 16:04:49.604916 | orchestrator | Wednesday 17 September 2025 16:04:35 +0000 (0:00:00.465) 0:03:21.748 ***
2025-09-17 16:04:49.604925 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-17 16:04:49.604939 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-17 16:04:49.604951 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-17 16:04:49.604964 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-17 16:04:49.604977 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-17 16:04:49.604991 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-17 16:04:49.605014 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-17 16:04:49.605029 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-17 16:04:49.605037 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-17 16:04:49.605044 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-17 16:04:49.605052 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-17 16:04:49.605060 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-17 16:04:49.605067 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-17 16:04:49.605075 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-17 16:04:49.605083 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-17 16:04:49.605090 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-17 16:04:49.605098 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-17 16:04:49.605106 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-17 16:04:49.605113 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-17 16:04:49.605121 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-17 16:04:49.605167 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-17 16:04:49.605176 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-17 16:04:49.605184 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-17 16:04:49.605191 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-17 16:04:49.605199 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-17 16:04:49.605206 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-17 16:04:49.605214 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-17 16:04:49.605222 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-17 16:04:49.605229 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-17 16:04:49.605237 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-17 16:04:49.605245 | orchestrator |
2025-09-17 16:04:49.605252 | orchestrator | TASK [Manage annotations] ******************************************************
2025-09-17 16:04:49.605260 | orchestrator | Wednesday 17 September 2025 16:04:47 +0000 (0:00:11.416) 0:03:33.164 ***
2025-09-17 16:04:49.605268 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:04:49.605275 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:04:49.605283 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:04:49.605290 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.605298 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.605306 | orchestrator | skipping: [testbed-node-2]
2025-09-17
16:04:49.605313 | orchestrator |
2025-09-17 16:04:49.605321 | orchestrator | TASK [Manage taints] ***********************************************************
2025-09-17 16:04:49.605329 | orchestrator | Wednesday 17 September 2025 16:04:48 +0000 (0:00:00.862) 0:03:34.027 ***
2025-09-17 16:04:49.605336 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:04:49.605344 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:04:49.605351 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:04:49.605359 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:04:49.605373 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:04:49.605380 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:04:49.605388 | orchestrator |
2025-09-17 16:04:49.605396 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:04:49.605404 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:04:49.605419 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-09-17 16:04:49.605427 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-17 16:04:49.605435 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-17 16:04:49.605449 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-17 16:04:49.605457 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-17 16:04:49.605465 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-17 16:04:49.605473 | orchestrator |
2025-09-17 16:04:49.605480 | orchestrator |
2025-09-17 16:04:49.605488 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:04:49.605496 | orchestrator | Wednesday 17 September 2025 16:04:48 +0000 (0:00:00.684) 0:03:34.711 ***
2025-09-17 16:04:49.605504 | orchestrator | ===============================================================================
2025-09-17 16:04:49.605511 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.11s
2025-09-17 16:04:49.605519 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.99s
2025-09-17 16:04:49.605527 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.79s
2025-09-17 16:04:49.605535 | orchestrator | Manage labels ---------------------------------------------------------- 11.42s
2025-09-17 16:04:49.605542 | orchestrator | kubectl : Install required packages ------------------------------------ 11.19s
2025-09-17 16:04:49.605550 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.24s
2025-09-17 16:04:49.605558 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.56s
2025-09-17 16:04:49.605565 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.44s
2025-09-17 16:04:49.605573 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.34s
2025-09-17 16:04:49.605581 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.12s
2025-09-17 16:04:49.605589 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.51s
2025-09-17 16:04:49.605596 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.02s
2025-09-17 16:04:49.605607 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.01s
2025-09-17 16:04:49.605620 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.92s
2025-09-17 16:04:49.605634 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.86s
2025-09-17 16:04:49.605648 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.83s
2025-09-17 16:04:49.605662 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.77s
2025-09-17 16:04:49.605675 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.62s
2025-09-17 16:04:49.605686 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.52s
2025-09-17 16:04:49.605698 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.44s
2025-09-17 16:04:52.557016 | orchestrator | 2025-09-17 16:04:52 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:52.557450 | orchestrator | 2025-09-17 16:04:52 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:52.559623 | orchestrator | 2025-09-17 16:04:52 | INFO  | Task 70485d72-cdd1-4d05-94ae-1dd635da8a02 is in state STARTED
2025-09-17 16:04:52.560621 | orchestrator | 2025-09-17 16:04:52 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:52.565846 | orchestrator | 2025-09-17 16:04:52 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:04:52.566745 | orchestrator | 2025-09-17 16:04:52 | INFO  | Task 3087bf64-6793-4031-9777-d5958a12c95b is in state STARTED
2025-09-17 16:04:52.566767 | orchestrator | 2025-09-17 16:04:52 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:55.612274 | orchestrator | 2025-09-17 16:04:55 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:55.612528 | orchestrator | 2025-09-17 16:04:55 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:55.617140 | orchestrator | 2025-09-17 16:04:55 | INFO  | Task 70485d72-cdd1-4d05-94ae-1dd635da8a02 is in state STARTED
2025-09-17 16:04:55.617190 | orchestrator | 2025-09-17 16:04:55 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:55.618620 | orchestrator | 2025-09-17 16:04:55 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:04:55.622520 | orchestrator | 2025-09-17 16:04:55 | INFO  | Task 3087bf64-6793-4031-9777-d5958a12c95b is in state STARTED
2025-09-17 16:04:55.623344 | orchestrator | 2025-09-17 16:04:55 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:04:58.710576 | orchestrator | 2025-09-17 16:04:58 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:04:58.711322 | orchestrator | 2025-09-17 16:04:58 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:04:58.711864 | orchestrator | 2025-09-17 16:04:58 | INFO  | Task 70485d72-cdd1-4d05-94ae-1dd635da8a02 is in state STARTED
2025-09-17 16:04:58.713699 | orchestrator | 2025-09-17 16:04:58 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:04:58.714263 | orchestrator | 2025-09-17 16:04:58 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:04:58.714736 | orchestrator | 2025-09-17 16:04:58 | INFO  | Task 3087bf64-6793-4031-9777-d5958a12c95b is in state SUCCESS
2025-09-17 16:04:58.714832 | orchestrator | 2025-09-17 16:04:58 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:05:01.739924 | orchestrator | 2025-09-17 16:05:01 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:05:01.740539 | orchestrator | 2025-09-17 16:05:01 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:05:01.741248 | orchestrator | 2025-09-17 16:05:01 | INFO  | Task 70485d72-cdd1-4d05-94ae-1dd635da8a02 is in state SUCCESS
2025-09-17 16:05:01.742445 | orchestrator | 2025-09-17 16:05:01 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:05:01.743299 | orchestrator | 2025-09-17 16:05:01 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:05:01.743321 | orchestrator | 2025-09-17 16:05:01 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:05:04.777839 | orchestrator | 2025-09-17 16:05:04 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:05:04.779399 | orchestrator | 2025-09-17 16:05:04 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:05:04.779566 | orchestrator | 2025-09-17 16:05:04 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:05:04.780455 | orchestrator | 2025-09-17 16:05:04 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:05:04.780494 | orchestrator | 2025-09-17 16:05:04 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:05:07.807709 | orchestrator | 2025-09-17 16:05:07 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:05:07.807788 | orchestrator | 2025-09-17 16:05:07 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:05:07.807802 | orchestrator | 2025-09-17 16:05:07 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:05:07.807814 | orchestrator | 2025-09-17 16:05:07 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:05:07.807825 | orchestrator | 2025-09-17 16:05:07 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:05:10.827383 | orchestrator | 2025-09-17 16:05:10 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:05:10.827789 | orchestrator | 2025-09-17 16:05:10 | INFO  | Task
8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:05:10.828319 | orchestrator | 2025-09-17 16:05:10 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:05:10.832092 | orchestrator | 2025-09-17 16:05:10 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:05:10.832172 | orchestrator | 2025-09-17 16:05:10 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:05:13.857271 | orchestrator | 2025-09-17 16:05:13 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:05:13.859508 | orchestrator | 2025-09-17 16:05:13 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:05:13.862437 | orchestrator | 2025-09-17 16:05:13 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state STARTED
2025-09-17 16:05:13.863516 | orchestrator | 2025-09-17 16:05:13 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED
2025-09-17 16:05:13.863702 | orchestrator | 2025-09-17 16:05:13 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:05:16.900691 | orchestrator | 2025-09-17 16:05:16 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED
2025-09-17 16:05:16.907633 | orchestrator | 2025-09-17 16:05:16 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:05:16.907970 | orchestrator | 2025-09-17 16:05:16 | INFO  | Task 6612285c-cf5d-4674-a91a-c730cc214ec5 is in state SUCCESS
2025-09-17 16:05:16.911309 | orchestrator |
2025-09-17 16:05:16.911347 | orchestrator |
2025-09-17 16:05:16.911360 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-09-17 16:05:16.911372 | orchestrator |
2025-09-17 16:05:16.911383 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-17 16:05:16.911394 | orchestrator | Wednesday 17 September 2025 16:04:53 +0000 (0:00:00.126) 0:00:00.126 ***
2025-09-17 16:05:16.911405 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-17 16:05:16.911416 | orchestrator |
2025-09-17 16:05:16.911427 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-17 16:05:16.911438 | orchestrator | Wednesday 17 September 2025 16:04:53 +0000 (0:00:00.750) 0:00:00.877 ***
2025-09-17 16:05:16.911466 | orchestrator | changed: [testbed-manager]
2025-09-17 16:05:16.911478 | orchestrator |
2025-09-17 16:05:16.911579 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-09-17 16:05:16.911594 | orchestrator | Wednesday 17 September 2025 16:04:55 +0000 (0:00:01.227) 0:00:02.104 ***
2025-09-17 16:05:16.911606 | orchestrator | changed: [testbed-manager]
2025-09-17 16:05:16.911617 | orchestrator |
2025-09-17 16:05:16.911629 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:05:16.911641 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:05:16.911653 | orchestrator |
2025-09-17 16:05:16.911665 | orchestrator |
2025-09-17 16:05:16.911676 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:05:16.911688 | orchestrator | Wednesday 17 September 2025 16:04:55 +0000 (0:00:00.587) 0:00:02.691 ***
2025-09-17 16:05:16.911699 | orchestrator | ===============================================================================
2025-09-17 16:05:16.911710 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.23s
2025-09-17 16:05:16.911722 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.75s
2025-09-17 16:05:16.911733 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.59s
2025-09-17 16:05:16.911744 | orchestrator |
2025-09-17 16:05:16.911756 | orchestrator |
2025-09-17 16:05:16.911767 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-17 16:05:16.911778 | orchestrator |
2025-09-17 16:05:16.911791 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-17 16:05:16.911802 | orchestrator | Wednesday 17 September 2025 16:04:52 +0000 (0:00:00.153) 0:00:00.153 ***
2025-09-17 16:05:16.911814 | orchestrator | ok: [testbed-manager]
2025-09-17 16:05:16.911826 | orchestrator |
2025-09-17 16:05:16.911838 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-17 16:05:16.911849 | orchestrator | Wednesday 17 September 2025 16:04:53 +0000 (0:00:00.498) 0:00:00.652 ***
2025-09-17 16:05:16.911860 | orchestrator | ok: [testbed-manager]
2025-09-17 16:05:16.911872 | orchestrator |
2025-09-17 16:05:16.911883 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-17 16:05:16.911895 | orchestrator | Wednesday 17 September 2025 16:04:53 +0000 (0:00:00.467) 0:00:01.119 ***
2025-09-17 16:05:16.911906 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-17 16:05:16.911918 | orchestrator |
2025-09-17 16:05:16.911929 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-17 16:05:16.911940 | orchestrator | Wednesday 17 September 2025 16:04:54 +0000 (0:00:00.658) 0:00:01.778 ***
2025-09-17 16:05:16.911952 | orchestrator | changed: [testbed-manager]
2025-09-17 16:05:16.911963 | orchestrator |
2025-09-17 16:05:16.911975 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-17 16:05:16.911986 | orchestrator | Wednesday 17 September 2025 16:04:55 +0000 (0:00:01.269) 0:00:03.047 ***
2025-09-17 16:05:16.911998 | orchestrator | changed: [testbed-manager]
2025-09-17 16:05:16.912009 | orchestrator |
2025-09-17 16:05:16.912021 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-17 16:05:16.912032 | orchestrator | Wednesday 17 September 2025 16:04:56 +0000 (0:00:00.712) 0:00:03.759 ***
2025-09-17 16:05:16.912051 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-17 16:05:16.912064 | orchestrator |
2025-09-17 16:05:16.912075 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-17 16:05:16.912087 | orchestrator | Wednesday 17 September 2025 16:04:58 +0000 (0:00:01.416) 0:00:05.176 ***
2025-09-17 16:05:16.912195 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-17 16:05:16.912212 | orchestrator |
2025-09-17 16:05:16.912223 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-17 16:05:16.912234 | orchestrator | Wednesday 17 September 2025 16:04:58 +0000 (0:00:00.712) 0:00:05.888 ***
2025-09-17 16:05:16.912255 | orchestrator | ok: [testbed-manager]
2025-09-17 16:05:16.912266 | orchestrator |
2025-09-17 16:05:16.912277 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-17 16:05:16.912288 | orchestrator | Wednesday 17 September 2025 16:04:59 +0000 (0:00:00.340) 0:00:06.229 ***
2025-09-17 16:05:16.912299 | orchestrator | ok: [testbed-manager]
2025-09-17 16:05:16.912310 | orchestrator |
2025-09-17 16:05:16.912320 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:05:16.912331 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:05:16.912342 | orchestrator |
2025-09-17 16:05:16.912353 | orchestrator |
2025-09-17 16:05:16.912364 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:05:16.912374 | orchestrator | Wednesday 17 September 2025 16:04:59 +0000 (0:00:00.270) 0:00:06.499 ***
2025-09-17 16:05:16.912385 | orchestrator | ===============================================================================
2025-09-17 16:05:16.912395 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.42s
2025-09-17 16:05:16.912406 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.27s
2025-09-17 16:05:16.912417 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.71s
2025-09-17 16:05:16.912440 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.71s
2025-09-17 16:05:16.912451 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.66s
2025-09-17 16:05:16.912462 | orchestrator | Get home directory of operator user ------------------------------------- 0.50s
2025-09-17 16:05:16.912473 | orchestrator | Create .kube directory -------------------------------------------------- 0.47s
2025-09-17 16:05:16.912483 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.34s
2025-09-17 16:05:16.912494 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s
2025-09-17 16:05:16.912505 | orchestrator |
2025-09-17 16:05:16.912515 | orchestrator |
2025-09-17 16:05:16.912526 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 16:05:16.912537 | orchestrator |
2025-09-17 16:05:16.912548 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 16:05:16.912558 | orchestrator | Wednesday 17 September 2025 16:04:12 +0000 (0:00:00.365) 0:00:00.365 ***
2025-09-17 16:05:16.912569 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:05:16.912580 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:05:16.912590 | orchestrator | ok: [testbed-node-5]
2025-09-17
16:05:16.912601 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:05:16.912611 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:05:16.912622 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:05:16.912632 | orchestrator |
2025-09-17 16:05:16.912643 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 16:05:16.912654 | orchestrator | Wednesday 17 September 2025 16:04:13 +0000 (0:00:00.873) 0:00:01.238 ***
2025-09-17 16:05:16.912664 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-17 16:05:16.912675 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-17 16:05:16.912686 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-17 16:05:16.912697 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-17 16:05:16.912707 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-17 16:05:16.912718 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-17 16:05:16.912729 | orchestrator |
2025-09-17 16:05:16.912739 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-09-17 16:05:16.912750 | orchestrator |
2025-09-17 16:05:16.912761 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-09-17 16:05:16.912778 | orchestrator | Wednesday 17 September 2025 16:04:14 +0000 (0:00:01.018) 0:00:02.257 ***
2025-09-17 16:05:16.912790 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:05:16.912802 | orchestrator |
2025-09-17 16:05:16.912812 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-17 16:05:16.912823 | orchestrator | Wednesday 17 September 2025 16:04:16 +0000 (0:00:01.670) 0:00:03.928 ***
2025-09-17 16:05:16.912836 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-17 16:05:16.912848 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-17 16:05:16.912861 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-17 16:05:16.912874 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-17 16:05:16.912887 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-17 16:05:16.912899 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-17 16:05:16.912911 | orchestrator |
2025-09-17 16:05:16.912922 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-17 16:05:16.912933 | orchestrator | Wednesday 17 September 2025 16:04:17 +0000 (0:00:01.313) 0:00:05.242 ***
2025-09-17 16:05:16.912943 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-17 16:05:16.912959 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-17 16:05:16.912971 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-17 16:05:16.912981 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-17 16:05:16.912992 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-17 16:05:16.913002 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-17 16:05:16.913013 | orchestrator |
2025-09-17 16:05:16.913024 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-17 16:05:16.913034 | orchestrator | Wednesday 17 September 2025 16:04:19 +0000 (0:00:01.834) 0:00:07.077 ***
2025-09-17 16:05:16.913045 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-09-17 16:05:16.913056 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:05:16.913067 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-09-17 16:05:16.913078 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:05:16.913088 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-09-17 16:05:16.913132 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:05:16.913144 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-09-17 16:05:16.913154 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:05:16.913165 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-09-17 16:05:16.913176 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:05:16.913186 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-09-17 16:05:16.913197 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:05:16.913207 | orchestrator |
2025-09-17 16:05:16.913218 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-09-17 16:05:16.913229 | orchestrator | Wednesday 17 September 2025 16:04:21 +0000 (0:00:01.579) 0:00:08.656 ***
2025-09-17 16:05:16.913240 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:05:16.913250 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:05:16.913261 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:05:16.913278 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:05:16.913289 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:05:16.913299 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:05:16.913310 | orchestrator |
2025-09-17 16:05:16.913321 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-09-17 16:05:16.913332 | orchestrator | Wednesday 17 September 2025 16:04:22 +0000 (0:00:00.882) 0:00:09.539 ***
2025-09-17 16:05:16.913345 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db',
'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913369 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913469 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913531 | orchestrator | 2025-09-17 16:05:16.913542 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-17 16:05:16.913553 | orchestrator | Wednesday 17 September 2025 16:04:23 +0000 (0:00:01.581) 0:00:11.121 *** 2025-09-17 16:05:16.913565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913576 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913743 | orchestrator | 2025-09-17 16:05:16.913754 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-17 16:05:16.913765 | orchestrator | Wednesday 17 September 2025 16:04:27 +0000 (0:00:03.539) 0:00:14.660 *** 2025-09-17 16:05:16.913776 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:05:16.913787 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:05:16.913798 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:05:16.913808 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:05:16.913819 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:05:16.913830 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:05:16.913840 | orchestrator | 2025-09-17 16:05:16.913851 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-17 16:05:16.913862 | orchestrator | Wednesday 17 September 2025 16:04:28 +0000 (0:00:01.351) 0:00:16.011 *** 2025-09-17 16:05:16.913873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913928 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913967 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.913978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.914001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.914013 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.914205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.914217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-17 16:05:16.914228 | orchestrator | 2025-09-17 16:05:16.914239 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-17 16:05:16.914250 | orchestrator | Wednesday 17 September 2025 16:04:31 +0000 (0:00:02.611) 0:00:18.623 *** 2025-09-17 16:05:16.914261 | orchestrator | 2025-09-17 16:05:16.914272 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-17 16:05:16.914283 | orchestrator | Wednesday 17 September 2025 16:04:31 +0000 (0:00:00.125) 0:00:18.748 *** 2025-09-17 16:05:16.914293 | orchestrator | 2025-09-17 16:05:16.914304 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-17 16:05:16.914315 | orchestrator | Wednesday 17 September 2025 16:04:31 +0000 (0:00:00.135) 0:00:18.884 *** 2025-09-17 16:05:16.914334 | orchestrator | 2025-09-17 16:05:16.914350 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-17 16:05:16.914362 | orchestrator | Wednesday 17 September 2025 16:04:31 +0000 (0:00:00.127) 0:00:19.011 *** 2025-09-17 16:05:16.914372 | orchestrator | 2025-09-17 16:05:16.914383 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-17 16:05:16.914393 | orchestrator | Wednesday 17 September 2025 16:04:31 +0000 (0:00:00.124) 0:00:19.136 *** 2025-09-17 16:05:16.914404 | orchestrator | 2025-09-17 16:05:16.914414 | orchestrator | TASK [openvswitch : Flush 
Handlers] ******************************************** 2025-09-17 16:05:16.914425 | orchestrator | Wednesday 17 September 2025 16:04:31 +0000 (0:00:00.116) 0:00:19.252 *** 2025-09-17 16:05:16.914436 | orchestrator | 2025-09-17 16:05:16.914446 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-17 16:05:16.914457 | orchestrator | Wednesday 17 September 2025 16:04:31 +0000 (0:00:00.118) 0:00:19.370 *** 2025-09-17 16:05:16.914467 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:05:16.914477 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:05:16.914487 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:05:16.914497 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:05:16.914506 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:05:16.914515 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:05:16.914525 | orchestrator | 2025-09-17 16:05:16.914534 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-17 16:05:16.914544 | orchestrator | Wednesday 17 September 2025 16:04:42 +0000 (0:00:10.888) 0:00:30.259 *** 2025-09-17 16:05:16.914553 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:05:16.914563 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:05:16.914572 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:05:16.914581 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:05:16.914591 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:05:16.914600 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:05:16.914610 | orchestrator | 2025-09-17 16:05:16.914619 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-17 16:05:16.914636 | orchestrator | Wednesday 17 September 2025 16:04:44 +0000 (0:00:01.732) 0:00:31.992 *** 2025-09-17 16:05:16.914646 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:05:16.914656 | orchestrator | changed: [testbed-node-0] 2025-09-17 
16:05:16.914665 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:05:16.914674 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:05:16.914684 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:05:16.914693 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:05:16.914703 | orchestrator | 2025-09-17 16:05:16.914712 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-17 16:05:16.914722 | orchestrator | Wednesday 17 September 2025 16:04:54 +0000 (0:00:09.917) 0:00:41.909 *** 2025-09-17 16:05:16.914731 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-17 16:05:16.914741 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-17 16:05:16.914751 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-17 16:05:16.914760 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-17 16:05:16.914770 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-17 16:05:16.914779 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-17 16:05:16.914789 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-17 16:05:16.914798 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-17 16:05:16.914813 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-17 16:05:16.914822 | orchestrator | changed: [testbed-node-5] => 
(item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-17 16:05:16.914831 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-17 16:05:16.914841 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-17 16:05:16.914850 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-17 16:05:16.914860 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-17 16:05:16.914869 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-17 16:05:16.914878 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-17 16:05:16.914888 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-17 16:05:16.914897 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-17 16:05:16.914907 | orchestrator | 2025-09-17 16:05:16.914916 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-17 16:05:16.914926 | orchestrator | Wednesday 17 September 2025 16:05:02 +0000 (0:00:07.609) 0:00:49.519 *** 2025-09-17 16:05:16.914935 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-17 16:05:16.914945 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:05:16.914954 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-17 16:05:16.914964 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:05:16.914973 | orchestrator | skipping: [testbed-node-5] => 
(item=br-ex)  2025-09-17 16:05:16.914983 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:05:16.914993 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-17 16:05:16.915002 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-17 16:05:16.915012 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-17 16:05:16.915022 | orchestrator | 2025-09-17 16:05:16.915031 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-17 16:05:16.915041 | orchestrator | Wednesday 17 September 2025 16:05:04 +0000 (0:00:02.531) 0:00:52.051 *** 2025-09-17 16:05:16.915050 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-17 16:05:16.915060 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:05:16.915069 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-17 16:05:16.915079 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:05:16.915088 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-17 16:05:16.915114 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:05:16.915124 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-17 16:05:16.915134 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-17 16:05:16.915143 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-17 16:05:16.915153 | orchestrator | 2025-09-17 16:05:16.915162 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-17 16:05:16.915171 | orchestrator | Wednesday 17 September 2025 16:05:07 +0000 (0:00:03.261) 0:00:55.313 *** 2025-09-17 16:05:16.915181 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:05:16.915190 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:05:16.915205 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:05:16.915215 | orchestrator | changed: 
[testbed-node-0] 2025-09-17 16:05:16.915735 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:05:16.915748 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:05:16.915764 | orchestrator | 2025-09-17 16:05:16.915774 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:05:16.915784 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-17 16:05:16.915798 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-17 16:05:16.915808 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-17 16:05:16.915818 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-17 16:05:16.915827 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-17 16:05:16.915837 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-17 16:05:16.915846 | orchestrator | 2025-09-17 16:05:16.915856 | orchestrator | 2025-09-17 16:05:16.915865 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:05:16.915875 | orchestrator | Wednesday 17 September 2025 16:05:15 +0000 (0:00:07.880) 0:01:03.194 *** 2025-09-17 16:05:16.915885 | orchestrator | =============================================================================== 2025-09-17 16:05:16.915894 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.80s 2025-09-17 16:05:16.915904 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.89s 2025-09-17 16:05:16.915913 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.61s 2025-09-17 16:05:16.915923 | orchestrator | 
openvswitch : Copying over config.json files for services --------------- 3.54s 2025-09-17 16:05:16.915932 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.26s 2025-09-17 16:05:16.915942 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.61s 2025-09-17 16:05:16.915951 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.53s 2025-09-17 16:05:16.915960 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.83s 2025-09-17 16:05:16.915970 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.73s 2025-09-17 16:05:16.915980 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.67s 2025-09-17 16:05:16.915989 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.58s 2025-09-17 16:05:16.915998 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.58s 2025-09-17 16:05:16.916008 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.35s 2025-09-17 16:05:16.916017 | orchestrator | module-load : Load modules ---------------------------------------------- 1.31s 2025-09-17 16:05:16.916027 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.02s 2025-09-17 16:05:16.916036 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.88s 2025-09-17 16:05:16.916046 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.87s 2025-09-17 16:05:16.916055 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.75s 2025-09-17 16:05:16.916065 | orchestrator | 2025-09-17 16:05:16 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED 2025-09-17 16:05:16.916075 | orchestrator | 
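The long run of "Task … is in state STARTED / Wait 1 second(s) until the next check" lines that follows comes from the OSISM manager polling its background tasks. A minimal sketch of such a wait loop (the `get_state` callable is hypothetical; the real client queries the OSISM manager API):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll task states until every task reports SUCCESS.

    `get_state` is a hypothetical callable mapping a task ID to its
    current state string ("STARTED", "SUCCESS", ...).
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        # sorted() copies the set, so discarding while iterating is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return True
```

This mirrors the cadence seen in the log: each cycle reports every still-pending task, then sleeps before the next check.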
2025-09-17 16:05:16 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:05:19.944933 | orchestrator | 2025-09-17 16:05:19 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED 2025-09-17 16:05:19.945491 | orchestrator | 2025-09-17 16:05:19 | INFO  | Task b4f8c542-a345-4c54-bc77-172026e3f0a1 is in state STARTED 2025-09-17 16:05:19.946239 | orchestrator | 2025-09-17 16:05:19 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:05:19.947034 | orchestrator | 2025-09-17 16:05:19 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state STARTED 2025-09-17 16:05:19.947157 | orchestrator | 2025-09-17 16:05:19 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 16:05:22 to 16:06:54, all four tasks remaining in state STARTED ...]
2025-09-17 16:06:57.430755 | orchestrator | 2025-09-17 16:06:57 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED 2025-09-17 16:06:57.432554 | orchestrator | 2025-09-17 16:06:57 | INFO  | Task b4f8c542-a345-4c54-bc77-172026e3f0a1 is in state STARTED 2025-09-17 16:06:57.434258 | orchestrator | 2025-09-17 16:06:57 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:06:57.436233 | orchestrator | 2025-09-17 16:06:57 | INFO  | Task 52a15c7c-ac5b-4cc9-8206-3142c4169e6c is in state SUCCESS 2025-09-17 16:06:57.436268 | orchestrator | 2025-09-17 16:06:57 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:06:57.437413 | orchestrator | 2025-09-17 16:06:57.437443 | orchestrator | 2025-09-17
16:06:57.437455 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-17 16:06:57.437467 | orchestrator | 2025-09-17 16:06:57.437478 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-17 16:06:57.437489 | orchestrator | Wednesday 17 September 2025 16:04:34 +0000 (0:00:00.152) 0:00:00.152 *** 2025-09-17 16:06:57.437500 | orchestrator | ok: [localhost] => { 2025-09-17 16:06:57.437513 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-17 16:06:57.437524 | orchestrator | } 2025-09-17 16:06:57.437536 | orchestrator | 2025-09-17 16:06:57.437547 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-17 16:06:57.437558 | orchestrator | Wednesday 17 September 2025 16:04:34 +0000 (0:00:00.063) 0:00:00.215 *** 2025-09-17 16:06:57.437570 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-17 16:06:57.437607 | orchestrator | ...ignoring 2025-09-17 16:06:57.437620 | orchestrator | 2025-09-17 16:06:57.437630 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-17 16:06:57.437641 | orchestrator | Wednesday 17 September 2025 16:04:37 +0000 (0:00:03.162) 0:00:03.378 *** 2025-09-17 16:06:57.437652 | orchestrator | skipping: [localhost] 2025-09-17 16:06:57.437662 | orchestrator | 2025-09-17 16:06:57.437673 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-17 16:06:57.437684 | orchestrator | Wednesday 17 September 2025 16:04:37 +0000 (0:00:00.043) 0:00:03.422 *** 2025-09-17 16:06:57.437694 | orchestrator | ok: [localhost] 2025-09-17 16:06:57.437705 | orchestrator | 2025-09-17 16:06:57.437715 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:06:57.437726 | orchestrator | 2025-09-17 16:06:57.437737 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:06:57.437747 | orchestrator | Wednesday 17 September 2025 16:04:37 +0000 (0:00:00.145) 0:00:03.567 *** 2025-09-17 16:06:57.437758 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:06:57.437768 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:06:57.437779 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:06:57.437790 | orchestrator | 2025-09-17 16:06:57.437800 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:06:57.437812 | orchestrator | Wednesday 17 September 2025 16:04:38 +0000 (0:00:00.359) 0:00:03.926 *** 2025-09-17 16:06:57.437823 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-17 16:06:57.437834 | orchestrator | ok: [testbed-node-2] => 
(item=enable_rabbitmq_True) 2025-09-17 16:06:57.437845 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-17 16:06:57.437855 | orchestrator | 2025-09-17 16:06:57.437866 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-17 16:06:57.437877 | orchestrator | 2025-09-17 16:06:57.437888 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-17 16:06:57.437898 | orchestrator | Wednesday 17 September 2025 16:04:39 +0000 (0:00:01.132) 0:00:05.059 *** 2025-09-17 16:06:57.437909 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:06:57.437920 | orchestrator | 2025-09-17 16:06:57.437930 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-17 16:06:57.437941 | orchestrator | Wednesday 17 September 2025 16:04:39 +0000 (0:00:00.610) 0:00:05.669 *** 2025-09-17 16:06:57.437951 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:06:57.437962 | orchestrator | 2025-09-17 16:06:57.437972 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-17 16:06:57.437983 | orchestrator | Wednesday 17 September 2025 16:04:40 +0000 (0:00:00.864) 0:00:06.533 *** 2025-09-17 16:06:57.437994 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:06:57.438005 | orchestrator | 2025-09-17 16:06:57.438059 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-09-17 16:06:57.438076 | orchestrator | Wednesday 17 September 2025 16:04:41 +0000 (0:00:00.354) 0:00:06.888 *** 2025-09-17 16:06:57.438089 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:06:57.438102 | orchestrator | 2025-09-17 16:06:57.438149 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-17 16:06:57.438164 | 
orchestrator | Wednesday 17 September 2025 16:04:41 +0000 (0:00:00.274) 0:00:07.163 *** 2025-09-17 16:06:57.438177 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:06:57.438190 | orchestrator | 2025-09-17 16:06:57.438204 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-17 16:06:57.438217 | orchestrator | Wednesday 17 September 2025 16:04:41 +0000 (0:00:00.378) 0:00:07.541 *** 2025-09-17 16:06:57.438229 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:06:57.438243 | orchestrator | 2025-09-17 16:06:57.438265 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-17 16:06:57.438277 | orchestrator | Wednesday 17 September 2025 16:04:42 +0000 (0:00:00.582) 0:00:08.124 *** 2025-09-17 16:06:57.438289 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-1, testbed-node-2, testbed-node-0 2025-09-17 16:06:57.438300 | orchestrator | 2025-09-17 16:06:57.438311 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-17 16:06:57.438323 | orchestrator | Wednesday 17 September 2025 16:04:44 +0000 (0:00:01.798) 0:00:09.922 *** 2025-09-17 16:06:57.438334 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:06:57.438345 | orchestrator | 2025-09-17 16:06:57.438356 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-17 16:06:57.438367 | orchestrator | Wednesday 17 September 2025 16:04:45 +0000 (0:00:00.879) 0:00:10.801 *** 2025-09-17 16:06:57.438379 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:06:57.438390 | orchestrator | 2025-09-17 16:06:57.438401 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-17 16:06:57.438413 | orchestrator | Wednesday 17 September 2025 16:04:46 +0000 (0:00:01.383) 0:00:12.185 *** 2025-09-17 16:06:57.438424 | orchestrator | 
skipping: [testbed-node-0] 2025-09-17 16:06:57.438435 | orchestrator | 2025-09-17 16:06:57.438458 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-17 16:06:57.438471 | orchestrator | Wednesday 17 September 2025 16:04:47 +0000 (0:00:01.042) 0:00:13.227 *** 2025-09-17 16:06:57.438487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 16:06:57.438505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 16:06:57.438525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 16:06:57.438545 | orchestrator | 2025-09-17 16:06:57.438557 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-17 16:06:57.438569 | orchestrator | Wednesday 17 September 2025 16:04:48 +0000 (0:00:01.237) 0:00:14.465 *** 2025-09-17 16:06:57.438592 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 16:06:57.438606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 16:06:57.438619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 16:06:57.438638 | orchestrator | 2025-09-17 16:06:57.438650 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-17 16:06:57.438661 | orchestrator | Wednesday 17 September 2025 16:04:51 +0000 (0:00:02.523) 0:00:16.988 *** 2025-09-17 16:06:57.438677 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-17 16:06:57.438689 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-17 16:06:57.438701 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-17 16:06:57.438713 | 
orchestrator | 2025-09-17 16:06:57.438724 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-17 16:06:57.438736 | orchestrator | Wednesday 17 September 2025 16:04:53 +0000 (0:00:02.358) 0:00:19.347 *** 2025-09-17 16:06:57.438749 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-17 16:06:57.438768 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-17 16:06:57.438786 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-17 16:06:57.438805 | orchestrator | 2025-09-17 16:06:57.438824 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-17 16:06:57.438842 | orchestrator | Wednesday 17 September 2025 16:04:56 +0000 (0:00:02.906) 0:00:22.254 *** 2025-09-17 16:06:57.438857 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-17 16:06:57.438868 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-17 16:06:57.438878 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-17 16:06:57.438889 | orchestrator | 2025-09-17 16:06:57.438907 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-17 16:06:57.438918 | orchestrator | Wednesday 17 September 2025 16:04:58 +0000 (0:00:01.666) 0:00:23.920 *** 2025-09-17 16:06:57.438929 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-17 16:06:57.438939 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-17 16:06:57.438950 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-17 16:06:57.438961 | orchestrator | 2025-09-17 16:06:57.438971 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-17 16:06:57.438982 | orchestrator | Wednesday 17 September 2025 16:04:59 +0000 (0:00:01.685) 0:00:25.605 *** 2025-09-17 16:06:57.438993 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-17 16:06:57.439004 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-17 16:06:57.439014 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-17 16:06:57.439025 | orchestrator | 2025-09-17 16:06:57.439036 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-17 16:06:57.439047 | orchestrator | Wednesday 17 September 2025 16:05:01 +0000 (0:00:01.384) 0:00:26.990 *** 2025-09-17 16:06:57.439057 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-17 16:06:57.439068 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-17 16:06:57.439078 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-17 16:06:57.439089 | orchestrator | 2025-09-17 16:06:57.439100 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-17 16:06:57.439117 | orchestrator | Wednesday 17 September 2025 16:05:02 +0000 (0:00:01.429) 0:00:28.419 *** 2025-09-17 16:06:57.439145 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:06:57.439156 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:06:57.439167 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:06:57.439177 | orchestrator | 2025-09-17 
16:06:57.439188 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-17 16:06:57.439199 | orchestrator | Wednesday 17 September 2025 16:05:03 +0000 (0:00:00.545) 0:00:28.965 *** 2025-09-17 16:06:57.439211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 16:06:57.439228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 16:06:57.439250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 16:06:57.439263 | orchestrator | 2025-09-17 16:06:57.439273 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-17 16:06:57.439284 | orchestrator | Wednesday 17 September 2025 16:05:04 +0000 (0:00:01.112) 0:00:30.077 *** 2025-09-17 16:06:57.439302 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:06:57.439313 | orchestrator | changed: [testbed-node-1] 
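The container definitions looped over above carry their bind mounts as Kolla-style `src:dst[:mode]` volume specs. As a readability aid, a minimal sketch of parsing those specs into structured tuples (the `parse_volume_spec` helper is illustrative, not part of kolla-ansible; values are taken verbatim from the log):

```python
# Parse Docker volume specs of the form "src:dst[:mode]", as seen in the
# rabbitmq container definition above. A source without a leading "/" is a
# named volume; the mode defaults to "rw" when omitted.
def parse_volume_spec(spec: str) -> tuple[str, str, str]:
    parts = spec.split(":")
    if len(parts) == 2:
        src, dst = parts
        mode = "rw"
    elif len(parts) == 3:
        src, dst, mode = parts
    else:
        raise ValueError(f"unexpected volume spec: {spec!r}")
    return src, dst, mode

# The volume list from the rabbitmq container definition in the log.
volumes = [
    "/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro",
    "/etc/localtime:/etc/localtime:ro",
    "/etc/timezone:/etc/timezone:ro",
    "rabbitmq:/var/lib/rabbitmq/",
    "kolla_logs:/var/log/kolla/",
]
parsed = [parse_volume_spec(v) for v in volumes]
```

The two named volumes (`rabbitmq`, `kolla_logs`) are the persistent state and log stores; the `/etc/kolla/rabbitmq/` bind mount is where the config files copied in the tasks above end up inside the container.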
2025-09-17 16:06:57.439323 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:06:57.439334 | orchestrator | 2025-09-17 16:06:57.439345 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-17 16:06:57.439355 | orchestrator | Wednesday 17 September 2025 16:05:05 +0000 (0:00:00.777) 0:00:30.855 *** 2025-09-17 16:06:57.439366 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:06:57.439377 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:06:57.439387 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:06:57.439398 | orchestrator | 2025-09-17 16:06:57.439408 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-17 16:06:57.439419 | orchestrator | Wednesday 17 September 2025 16:05:12 +0000 (0:00:06.933) 0:00:37.788 *** 2025-09-17 16:06:57.439429 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:06:57.439440 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:06:57.439451 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:06:57.439461 | orchestrator | 2025-09-17 16:06:57.439472 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-17 16:06:57.439482 | orchestrator | 2025-09-17 16:06:57.439493 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-17 16:06:57.439504 | orchestrator | Wednesday 17 September 2025 16:05:12 +0000 (0:00:00.304) 0:00:38.093 *** 2025-09-17 16:06:57.439515 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:06:57.439525 | orchestrator | 2025-09-17 16:06:57.439536 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-17 16:06:57.439546 | orchestrator | Wednesday 17 September 2025 16:05:12 +0000 (0:00:00.620) 0:00:38.714 *** 2025-09-17 16:06:57.439557 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:06:57.439567 | orchestrator | 2025-09-17 
16:06:57.439578 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-17 16:06:57.439589 | orchestrator | Wednesday 17 September 2025 16:05:13 +0000 (0:00:00.215) 0:00:38.929 *** 2025-09-17 16:06:57.439599 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:06:57.439610 | orchestrator | 2025-09-17 16:06:57.439621 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-17 16:06:57.439631 | orchestrator | Wednesday 17 September 2025 16:05:14 +0000 (0:00:01.611) 0:00:40.541 *** 2025-09-17 16:06:57.439642 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:06:57.439653 | orchestrator | 2025-09-17 16:06:57.439663 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-17 16:06:57.439674 | orchestrator | 2025-09-17 16:06:57.439685 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-17 16:06:57.439695 | orchestrator | Wednesday 17 September 2025 16:06:12 +0000 (0:00:57.976) 0:01:38.517 *** 2025-09-17 16:06:57.439710 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:06:57.439721 | orchestrator | 2025-09-17 16:06:57.439732 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-17 16:06:57.439742 | orchestrator | Wednesday 17 September 2025 16:06:13 +0000 (0:00:00.643) 0:01:39.161 *** 2025-09-17 16:06:57.439753 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:06:57.439764 | orchestrator | 2025-09-17 16:06:57.439774 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-17 16:06:57.439785 | orchestrator | Wednesday 17 September 2025 16:06:13 +0000 (0:00:00.409) 0:01:39.570 *** 2025-09-17 16:06:57.439796 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:06:57.439806 | orchestrator | 2025-09-17 16:06:57.439817 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2025-09-17 16:06:57.439828 | orchestrator | Wednesday 17 September 2025 16:06:20 +0000 (0:00:06.666) 0:01:46.237 *** 2025-09-17 16:06:57.439838 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:06:57.439849 | orchestrator | 2025-09-17 16:06:57.439860 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-17 16:06:57.439877 | orchestrator | 2025-09-17 16:06:57.439887 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-17 16:06:57.439898 | orchestrator | Wednesday 17 September 2025 16:06:32 +0000 (0:00:11.573) 0:01:57.811 *** 2025-09-17 16:06:57.439909 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:06:57.439919 | orchestrator | 2025-09-17 16:06:57.439930 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-17 16:06:57.439940 | orchestrator | Wednesday 17 September 2025 16:06:32 +0000 (0:00:00.708) 0:01:58.519 *** 2025-09-17 16:06:57.439951 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:06:57.439962 | orchestrator | 2025-09-17 16:06:57.439972 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-17 16:06:57.439989 | orchestrator | Wednesday 17 September 2025 16:06:33 +0000 (0:00:00.473) 0:01:58.992 *** 2025-09-17 16:06:57.440000 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:06:57.440011 | orchestrator | 2025-09-17 16:06:57.440021 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-17 16:06:57.440032 | orchestrator | Wednesday 17 September 2025 16:06:40 +0000 (0:00:06.813) 0:02:05.806 *** 2025-09-17 16:06:57.440043 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:06:57.440053 | orchestrator | 2025-09-17 16:06:57.440064 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2025-09-17 16:06:57.440075 | orchestrator | 2025-09-17 16:06:57.440085 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-17 16:06:57.440096 | orchestrator | Wednesday 17 September 2025 16:06:51 +0000 (0:00:11.657) 0:02:17.463 *** 2025-09-17 16:06:57.440107 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:06:57.440117 | orchestrator | 2025-09-17 16:06:57.440155 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-17 16:06:57.440166 | orchestrator | Wednesday 17 September 2025 16:06:52 +0000 (0:00:00.645) 0:02:18.108 *** 2025-09-17 16:06:57.440177 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-17 16:06:57.440188 | orchestrator | enable_outward_rabbitmq_True 2025-09-17 16:06:57.440199 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-17 16:06:57.440209 | orchestrator | outward_rabbitmq_restart 2025-09-17 16:06:57.440220 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:06:57.440231 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:06:57.440241 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:06:57.440252 | orchestrator | 2025-09-17 16:06:57.440262 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-17 16:06:57.440273 | orchestrator | skipping: no hosts matched 2025-09-17 16:06:57.440284 | orchestrator | 2025-09-17 16:06:57.440294 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-17 16:06:57.440305 | orchestrator | skipping: no hosts matched 2025-09-17 16:06:57.440315 | orchestrator | 2025-09-17 16:06:57.440326 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-17 16:06:57.440337 | orchestrator | skipping: no hosts matched 
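The PLAY RECAP that follows reports per-host task counters in a fixed `key=value` layout. A minimal sketch of a parser for such recap lines (assuming the standard Ansible format; `parse_recap_line` is an illustrative helper, not an Ansible API):

```python
import re

# Parse an Ansible PLAY RECAP line such as:
#   "testbed-node-0 : ok=23 changed=14 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0"
# into a (host, counters) pair.
def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    host, _, rest = line.partition(":")
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

host, counts = parse_recap_line(
    "testbed-node-0 : ok=23 changed=14 unreachable=0 failed=0 "
    "skipped=8 rescued=0 ignored=0"
)
```

A non-zero `failed` or `unreachable` counter is what CI jobs typically key off when deciding whether a deploy step succeeded.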
2025-09-17 16:06:57.440347 | orchestrator | 2025-09-17 16:06:57.440358 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:06:57.440369 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-17 16:06:57.440380 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-17 16:06:57.440391 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:06:57.440402 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:06:57.440413 | orchestrator | 2025-09-17 16:06:57.440430 | orchestrator | 2025-09-17 16:06:57.440441 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:06:57.440451 | orchestrator | Wednesday 17 September 2025 16:06:55 +0000 (0:00:02.746) 0:02:20.855 *** 2025-09-17 16:06:57.440462 | orchestrator | =============================================================================== 2025-09-17 16:06:57.440472 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.21s 2025-09-17 16:06:57.440483 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.09s 2025-09-17 16:06:57.440493 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.93s 2025-09-17 16:06:57.440504 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.16s 2025-09-17 16:06:57.440515 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.91s 2025-09-17 16:06:57.440530 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.75s 2025-09-17 16:06:57.440541 | orchestrator | rabbitmq : Copying over config.json files for services 
------------------ 2.52s 2025-09-17 16:06:57.440552 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.36s 2025-09-17 16:06:57.440562 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.97s 2025-09-17 16:06:57.440573 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.80s 2025-09-17 16:06:57.440584 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.69s 2025-09-17 16:06:57.440594 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.67s 2025-09-17 16:06:57.440605 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.43s 2025-09-17 16:06:57.440615 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.38s 2025-09-17 16:06:57.440626 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.38s 2025-09-17 16:06:57.440636 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.24s 2025-09-17 16:06:57.440647 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.13s 2025-09-17 16:06:57.440657 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.11s 2025-09-17 16:06:57.440668 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.10s 2025-09-17 16:06:57.440678 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.04s 2025-09-17 16:07:00.471061 | orchestrator | 2025-09-17 16:07:00 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED 2025-09-17 16:07:00.471781 | orchestrator | 2025-09-17 16:07:00 | INFO  | Task b4f8c542-a345-4c54-bc77-172026e3f0a1 is in state STARTED 2025-09-17 16:07:00.472991 | orchestrator | 2025-09-17 16:07:00 | INFO  | Task 
8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:07:00.473034 | orchestrator | 2025-09-17 16:07:00 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:07:46.131671 | orchestrator | 2025-09-17 16:07:46 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED 2025-09-17 16:07:46.132305 | orchestrator | 2025-09-17 16:07:46 | INFO  | Task b4f8c542-a345-4c54-bc77-172026e3f0a1 is in state STARTED 2025-09-17 16:07:46.133939 | orchestrator | 2025-09-17 16:07:46 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:07:46.134275 | orchestrator | 2025-09-17 16:07:46 | INFO  | Wait 1 second(s) until the next
check 2025-09-17 16:07:49.193914 | orchestrator | 2025-09-17 16:07:49 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED 2025-09-17 16:07:49.198266 | orchestrator | 2025-09-17 16:07:49 | INFO  | Task b4f8c542-a345-4c54-bc77-172026e3f0a1 is in state SUCCESS 2025-09-17 16:07:49.200932 | orchestrator | 2025-09-17 16:07:49.200997 | orchestrator | 2025-09-17 16:07:49.201010 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:07:49.201023 | orchestrator | 2025-09-17 16:07:49.201035 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:07:49.201067 | orchestrator | Wednesday 17 September 2025 16:05:20 +0000 (0:00:00.225) 0:00:00.225 *** 2025-09-17 16:07:49.201078 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:07:49.201090 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:07:49.201101 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:07:49.201111 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:07:49.201122 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:07:49.201132 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:07:49.201143 | orchestrator | 2025-09-17 16:07:49.201180 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:07:49.201191 | orchestrator | Wednesday 17 September 2025 16:05:21 +0000 (0:00:00.953) 0:00:01.178 *** 2025-09-17 16:07:49.201202 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-17 16:07:49.201213 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-17 16:07:49.201224 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-17 16:07:49.201234 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-17 16:07:49.201245 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-17 16:07:49.201256 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 
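[editor's note] The repeated "is in state STARTED ... Wait 1 second(s) until the next check" lines above are a client polling remote task states until each task reaches a terminal state. A minimal sketch of that wait loop, assuming a hypothetical `fetch_state(task_id)` callable (not OSISM's actual implementation), might look like:

```python
import time


def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=3600.0):
    """Poll task states until every task reaches a terminal state.

    fetch_state(task_id) -> str, e.g. "STARTED" or "SUCCESS" (hypothetical).
    Returns a dict mapping task id -> final observed state.
    """
    terminal = {"SUCCESS", "FAILURE"}
    deadline = time.monotonic() + timeout
    while True:
        # Re-query every task on each pass, mirroring the log output above.
        states = {tid: fetch_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(s in terminal for s in states.values()):
            return states
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {states}")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

Note that a task that never leaves STARTED keeps the loop running until the timeout, which is why the same three task IDs repeat for minutes in this log before the first one flips to SUCCESS.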
2025-09-17 16:07:49.201266 | orchestrator | 2025-09-17 16:07:49.201277 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-17 16:07:49.201288 | orchestrator | 2025-09-17 16:07:49.201299 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-17 16:07:49.201309 | orchestrator | Wednesday 17 September 2025 16:05:22 +0000 (0:00:01.063) 0:00:02.242 *** 2025-09-17 16:07:49.201375 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:07:49.201395 | orchestrator | 2025-09-17 16:07:49.201407 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-17 16:07:49.201417 | orchestrator | Wednesday 17 September 2025 16:05:23 +0000 (0:00:01.025) 0:00:03.267 *** 2025-09-17 16:07:49.201431 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 
'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201531 | orchestrator | 2025-09-17 16:07:49.201544 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-17 16:07:49.201557 
| orchestrator | Wednesday 17 September 2025 16:05:24 +0000 (0:00:00.988) 0:00:04.255 *** 2025-09-17 16:07:49.201570 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-09-17 16:07:49.201623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201648 | orchestrator | 2025-09-17 16:07:49.201661 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-17 16:07:49.201673 | orchestrator | Wednesday 17 September 2025 16:05:25 +0000 (0:00:01.781) 0:00:06.037 *** 2025-09-17 16:07:49.201692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201709 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201781 | orchestrator | 2025-09-17 16:07:49.201793 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-17 16:07:49.201806 | orchestrator | Wednesday 17 September 2025 16:05:26 +0000 (0:00:01.016) 0:00:07.054 *** 2025-09-17 16:07:49.201819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201908 | orchestrator | 2025-09-17 16:07:49.201919 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-17 16:07:49.201930 | orchestrator | Wednesday 17 September 2025 16:05:28 +0000 (0:00:01.391) 0:00:08.445 *** 2025-09-17 16:07:49.201940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201952 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201963 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.201984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.202002 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.202012 | orchestrator | 2025-09-17 16:07:49.202193 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-17 16:07:49.202207 | orchestrator | Wednesday 17 September 2025 16:05:29 +0000 (0:00:01.230) 0:00:09.675 *** 2025-09-17 16:07:49.202217 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:07:49.202229 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:07:49.202239 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:07:49.202250 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:07:49.202261 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:07:49.202271 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:07:49.202282 | orchestrator | 2025-09-17 16:07:49.202299 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-17 16:07:49.202310 | orchestrator | Wednesday 17 September 2025 16:05:32 +0000 (0:00:02.721) 0:00:12.397 *** 2025-09-17 16:07:49.202320 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-17 16:07:49.202331 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-17 16:07:49.202342 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-17 16:07:49.202361 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-17 16:07:49.202372 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-17 16:07:49.202383 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-17 16:07:49.202393 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-17 16:07:49.202404 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-17 16:07:49.202414 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-17 16:07:49.202425 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-17 16:07:49.202436 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-17 16:07:49.202446 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-17 16:07:49.202457 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-17 16:07:49.202469 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-17 16:07:49.202480 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-17 16:07:49.202491 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-17 16:07:49.202501 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-17 16:07:49.202520 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-17 16:07:49.202531 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-17 16:07:49.202543 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-17 16:07:49.202554 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-17 16:07:49.202564 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-17 16:07:49.202575 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-17 16:07:49.202585 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-17 16:07:49.202596 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-17 16:07:49.202606 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-17 16:07:49.202617 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-17 16:07:49.202627 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-17 16:07:49.202637 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-17 16:07:49.202648 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-17 16:07:49.202659 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-17 16:07:49.202669 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-17 16:07:49.202680 | 
orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-17 16:07:49.202691 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-17 16:07:49.202701 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-17 16:07:49.202712 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-17 16:07:49.202723 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-17 16:07:49.202738 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-17 16:07:49.202749 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-17 16:07:49.202760 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-17 16:07:49.202776 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-17 16:07:49.202789 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-17 16:07:49.202802 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-17 16:07:49.202815 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-17 16:07:49.202828 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-17 16:07:49.202840 | orchestrator | ok: [testbed-node-2] => 
(item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-17 16:07:49.202859 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-17 16:07:49.202871 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-17 16:07:49.202883 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-17 16:07:49.202895 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-17 16:07:49.202907 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-17 16:07:49.202920 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-17 16:07:49.202932 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-17 16:07:49.202945 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-17 16:07:49.202957 | orchestrator | 2025-09-17 16:07:49.202969 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-17 16:07:49.202982 | orchestrator | Wednesday 17 September 2025 16:05:50 +0000 (0:00:18.677) 0:00:31.074 *** 2025-09-17 16:07:49.202995 | orchestrator | 2025-09-17 16:07:49.203007 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-17 16:07:49.203019 | orchestrator | Wednesday 17 September 2025 16:05:51 +0000 (0:00:00.231) 0:00:31.306 *** 2025-09-17 
16:07:49.203032 | orchestrator | 2025-09-17 16:07:49.203044 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-17 16:07:49.203056 | orchestrator | Wednesday 17 September 2025 16:05:51 +0000 (0:00:00.062) 0:00:31.368 *** 2025-09-17 16:07:49.203068 | orchestrator | 2025-09-17 16:07:49.203080 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-17 16:07:49.203093 | orchestrator | Wednesday 17 September 2025 16:05:51 +0000 (0:00:00.062) 0:00:31.431 *** 2025-09-17 16:07:49.203105 | orchestrator | 2025-09-17 16:07:49.203118 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-17 16:07:49.203131 | orchestrator | Wednesday 17 September 2025 16:05:51 +0000 (0:00:00.061) 0:00:31.493 *** 2025-09-17 16:07:49.203141 | orchestrator | 2025-09-17 16:07:49.203224 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-17 16:07:49.203236 | orchestrator | Wednesday 17 September 2025 16:05:51 +0000 (0:00:00.069) 0:00:31.562 *** 2025-09-17 16:07:49.203247 | orchestrator | 2025-09-17 16:07:49.203257 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-17 16:07:49.203268 | orchestrator | Wednesday 17 September 2025 16:05:51 +0000 (0:00:00.081) 0:00:31.643 *** 2025-09-17 16:07:49.203278 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:07:49.203289 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:07:49.203298 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:07:49.203308 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:07:49.203317 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:07:49.203327 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:07:49.203336 | orchestrator | 2025-09-17 16:07:49.203346 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-17 
16:07:49.203355 | orchestrator | Wednesday 17 September 2025 16:05:53 +0000 (0:00:01.588) 0:00:33.232 ***
2025-09-17 16:07:49.203365 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:07:49.203374 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:07:49.203384 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:07:49.203393 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:07:49.203402 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:07:49.203412 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:07:49.203427 | orchestrator |
2025-09-17 16:07:49.203437 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-09-17 16:07:49.203446 | orchestrator |
2025-09-17 16:07:49.203456 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-17 16:07:49.203469 | orchestrator | Wednesday 17 September 2025 16:06:25 +0000 (0:00:32.220) 0:01:05.452 ***
2025-09-17 16:07:49.203479 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:07:49.203489 | orchestrator |
2025-09-17 16:07:49.203498 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-17 16:07:49.203508 | orchestrator | Wednesday 17 September 2025 16:06:26 +0000 (0:00:00.724) 0:01:06.177 ***
2025-09-17 16:07:49.203517 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:07:49.203526 | orchestrator |
2025-09-17 16:07:49.203542 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-09-17 16:07:49.203552 | orchestrator | Wednesday 17 September 2025 16:06:26 +0000 (0:00:00.517) 0:01:06.695 ***
2025-09-17 16:07:49.203562 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:07:49.203571 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:07:49.203581 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:07:49.203590 | orchestrator |
2025-09-17 16:07:49.203600 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-09-17 16:07:49.203609 | orchestrator | Wednesday 17 September 2025 16:06:27 +0000 (0:00:00.986) 0:01:07.682 ***
2025-09-17 16:07:49.203618 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:07:49.203628 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:07:49.203637 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:07:49.203646 | orchestrator |
2025-09-17 16:07:49.203656 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-09-17 16:07:49.203666 | orchestrator | Wednesday 17 September 2025 16:06:27 +0000 (0:00:00.303) 0:01:07.985 ***
2025-09-17 16:07:49.203675 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:07:49.203684 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:07:49.203693 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:07:49.203703 | orchestrator |
2025-09-17 16:07:49.203712 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-09-17 16:07:49.203722 | orchestrator | Wednesday 17 September 2025 16:06:28 +0000 (0:00:00.323) 0:01:08.308 ***
2025-09-17 16:07:49.203731 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:07:49.203741 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:07:49.203750 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:07:49.203760 | orchestrator |
2025-09-17 16:07:49.203769 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-09-17 16:07:49.203779 | orchestrator | Wednesday 17 September 2025 16:06:28 +0000 (0:00:00.288) 0:01:08.597 ***
2025-09-17 16:07:49.203788 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:07:49.203797 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:07:49.203807 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:07:49.203816 | orchestrator |
2025-09-17 16:07:49.203825 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-09-17 16:07:49.203835 | orchestrator | Wednesday 17 September 2025 16:06:28 +0000 (0:00:00.477) 0:01:09.074 ***
2025-09-17 16:07:49.203844 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.203853 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.203863 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.203872 | orchestrator |
2025-09-17 16:07:49.203881 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-09-17 16:07:49.203891 | orchestrator | Wednesday 17 September 2025 16:06:29 +0000 (0:00:00.297) 0:01:09.371 ***
2025-09-17 16:07:49.203900 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.203910 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.203919 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.203929 | orchestrator |
2025-09-17 16:07:49.203938 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-09-17 16:07:49.203957 | orchestrator | Wednesday 17 September 2025 16:06:29 +0000 (0:00:00.292) 0:01:09.664 ***
2025-09-17 16:07:49.203966 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.203976 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.203985 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.203995 | orchestrator |
2025-09-17 16:07:49.204004 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-09-17 16:07:49.204013 | orchestrator | Wednesday 17 September 2025 16:06:29 +0000 (0:00:00.324) 0:01:09.988 ***
2025-09-17 16:07:49.204023 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204032 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204042 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204051 | orchestrator |
2025-09-17 16:07:49.204060 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-09-17 16:07:49.204070 | orchestrator | Wednesday 17 September 2025 16:06:30 +0000 (0:00:00.496) 0:01:10.484 ***
2025-09-17 16:07:49.204080 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204089 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204098 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204108 | orchestrator |
2025-09-17 16:07:49.204117 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-09-17 16:07:49.204127 | orchestrator | Wednesday 17 September 2025 16:06:30 +0000 (0:00:00.299) 0:01:10.784 ***
2025-09-17 16:07:49.204136 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204187 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204199 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204209 | orchestrator |
2025-09-17 16:07:49.204218 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-09-17 16:07:49.204228 | orchestrator | Wednesday 17 September 2025 16:06:30 +0000 (0:00:00.301) 0:01:11.085 ***
2025-09-17 16:07:49.204237 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204247 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204256 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204265 | orchestrator |
2025-09-17 16:07:49.204275 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-09-17 16:07:49.204285 | orchestrator | Wednesday 17 September 2025 16:06:31 +0000 (0:00:00.315) 0:01:11.400 ***
2025-09-17 16:07:49.204294 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204304 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204313 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204322 | orchestrator |
2025-09-17 16:07:49.204332 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-09-17 16:07:49.204346 | orchestrator | Wednesday 17 September 2025 16:06:31 +0000 (0:00:00.585) 0:01:11.986 ***
2025-09-17 16:07:49.204356 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204365 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204374 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204384 | orchestrator |
2025-09-17 16:07:49.204393 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-09-17 16:07:49.204403 | orchestrator | Wednesday 17 September 2025 16:06:32 +0000 (0:00:00.333) 0:01:12.319 ***
2025-09-17 16:07:49.204412 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204421 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204431 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204440 | orchestrator |
2025-09-17 16:07:49.204455 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-09-17 16:07:49.204465 | orchestrator | Wednesday 17 September 2025 16:06:32 +0000 (0:00:00.343) 0:01:12.662 ***
2025-09-17 16:07:49.204474 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204484 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204493 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204503 | orchestrator |
2025-09-17 16:07:49.204510 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-09-17 16:07:49.204523 | orchestrator | Wednesday 17 September 2025 16:06:32 +0000 (0:00:00.355) 0:01:13.018 ***
2025-09-17 16:07:49.204531 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204538 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204546 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204554 | orchestrator |
2025-09-17 16:07:49.204561 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-17 16:07:49.204569 | orchestrator | Wednesday 17 September 2025 16:06:33 +0000 (0:00:00.847) 0:01:13.865 ***
2025-09-17 16:07:49.204577 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:07:49.204585 | orchestrator |
2025-09-17 16:07:49.204593 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-09-17 16:07:49.204601 | orchestrator | Wednesday 17 September 2025 16:06:34 +0000 (0:00:00.684) 0:01:14.550 ***
2025-09-17 16:07:49.204608 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:07:49.204616 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:07:49.204624 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:07:49.204632 | orchestrator |
2025-09-17 16:07:49.204639 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-09-17 16:07:49.204647 | orchestrator | Wednesday 17 September 2025 16:06:34 +0000 (0:00:00.460) 0:01:15.011 ***
2025-09-17 16:07:49.204655 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:07:49.204662 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:07:49.204670 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:07:49.204678 | orchestrator |
2025-09-17 16:07:49.204685 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-09-17 16:07:49.204693 | orchestrator | Wednesday 17 September 2025 16:06:35 +0000 (0:00:00.729) 0:01:15.740 ***
2025-09-17 16:07:49.204701 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204709 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204717 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204724 | orchestrator |
2025-09-17 16:07:49.204732 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-09-17 16:07:49.204740 | orchestrator | Wednesday 17 September 2025 16:06:36 +0000 (0:00:00.491) 0:01:16.232 ***
2025-09-17 16:07:49.204747 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204755 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204763 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204771 | orchestrator |
2025-09-17 16:07:49.204778 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-09-17 16:07:49.204786 | orchestrator | Wednesday 17 September 2025 16:06:36 +0000 (0:00:00.424) 0:01:16.656 ***
2025-09-17 16:07:49.204794 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204801 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204809 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204817 | orchestrator |
2025-09-17 16:07:49.204825 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-09-17 16:07:49.204832 | orchestrator | Wednesday 17 September 2025 16:06:36 +0000 (0:00:00.345) 0:01:17.001 ***
2025-09-17 16:07:49.204840 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204848 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204855 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204863 | orchestrator |
2025-09-17 16:07:49.204871 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-09-17 16:07:49.204878 | orchestrator | Wednesday 17 September 2025 16:06:37 +0000 (0:00:00.702) 0:01:17.704 ***
2025-09-17 16:07:49.204886 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204894 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204901 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204909 | orchestrator |
2025-09-17 16:07:49.204917 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-09-17 16:07:49.204925 | orchestrator | Wednesday 17 September 2025 16:06:37 +0000 (0:00:00.322) 0:01:18.026 ***
2025-09-17 16:07:49.204936 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.204944 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.204952 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.204960 | orchestrator |
2025-09-17 16:07:49.204967 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-17 16:07:49.204975 | orchestrator | Wednesday 17 September 2025 16:06:38 +0000 (0:00:00.336) 0:01:18.362 ***
2025-09-17 16:07:49.204984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.204997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205075 | orchestrator |
2025-09-17 16:07:49.205083 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-17 16:07:49.205091 | orchestrator | Wednesday 17 September 2025 16:06:39 +0000 (0:00:01.558) 0:01:19.921 ***
2025-09-17 16:07:49.205099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205196 | orchestrator |
2025-09-17 16:07:49.205204 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-17 16:07:49.205212 | orchestrator | Wednesday 17 September 2025 16:06:43 +0000 (0:00:04.031) 0:01:23.952 ***
2025-09-17 16:07:49.205220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205313 | orchestrator |
2025-09-17 16:07:49.205321 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-17 16:07:49.205329 | orchestrator | Wednesday 17 September 2025 16:06:46 +0000 (0:00:02.441) 0:01:26.394 ***
2025-09-17 16:07:49.205337 | orchestrator |
2025-09-17 16:07:49.205345 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-17 16:07:49.205352 | orchestrator | Wednesday 17 September 2025 16:06:46 +0000 (0:00:00.071) 0:01:26.465 ***
2025-09-17 16:07:49.205360 | orchestrator |
2025-09-17 16:07:49.205368 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-17 16:07:49.205376 | orchestrator | Wednesday 17 September 2025 16:06:46 +0000 (0:00:00.064) 0:01:26.530 ***
2025-09-17 16:07:49.205383 | orchestrator |
2025-09-17 16:07:49.205391 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-17 16:07:49.205399 | orchestrator | Wednesday 17 September 2025 16:06:46 +0000 (0:00:00.067) 0:01:26.598 ***
2025-09-17 16:07:49.205406 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:07:49.205414 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:07:49.205422 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:07:49.205430 | orchestrator |
2025-09-17 16:07:49.205438 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-17 16:07:49.205445 | orchestrator | Wednesday 17 September 2025 16:06:53 +0000 (0:00:07.539) 0:01:34.138 ***
2025-09-17 16:07:49.205453 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:07:49.205461 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:07:49.205468 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:07:49.205476 | orchestrator |
2025-09-17 16:07:49.205484 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-17 16:07:49.205492 | orchestrator | Wednesday 17 September 2025 16:07:01 +0000 (0:00:07.753) 0:01:41.891 ***
2025-09-17 16:07:49.205499 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:07:49.205507 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:07:49.205515 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:07:49.205522 | orchestrator |
2025-09-17 16:07:49.205534 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-17 16:07:49.205542 | orchestrator | Wednesday 17 September 2025 16:07:09 +0000 (0:00:07.430) 0:01:49.322 ***
2025-09-17 16:07:49.205549 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:07:49.205557 | orchestrator |
2025-09-17 16:07:49.205565 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-17 16:07:49.205573 | orchestrator | Wednesday 17 September 2025 16:07:09 +0000 (0:00:00.128) 0:01:49.451 ***
2025-09-17 16:07:49.205580 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:07:49.205588 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:07:49.205596 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:07:49.205604 | orchestrator |
2025-09-17 16:07:49.205616 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-17 16:07:49.205624 | orchestrator | Wednesday 17 September 2025 16:07:10 +0000 (0:00:00.785) 0:01:50.236 ***
2025-09-17 16:07:49.205632 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.205639 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.205647 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:07:49.205654 | orchestrator |
2025-09-17 16:07:49.205662 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-17 16:07:49.205670 | orchestrator | Wednesday 17 September 2025 16:07:10 +0000 (0:00:00.636) 0:01:50.873 ***
2025-09-17 16:07:49.205678 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:07:49.205690 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:07:49.205698 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:07:49.205705 | orchestrator |
2025-09-17 16:07:49.205713 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-17 16:07:49.205721 | orchestrator | Wednesday 17 September 2025 16:07:11 +0000 (0:00:00.910) 0:01:51.783 ***
2025-09-17 16:07:49.205729 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:07:49.205737 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:07:49.205744 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:07:49.205752 | orchestrator |
2025-09-17 16:07:49.205760 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-17 16:07:49.205767 | orchestrator | Wednesday 17 September 2025 16:07:12 +0000 (0:00:00.624) 0:01:52.407 ***
2025-09-17 16:07:49.205775 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:07:49.205783 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:07:49.205791 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:07:49.205798 | orchestrator |
2025-09-17 16:07:49.205806 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-17 16:07:49.205814 | orchestrator | Wednesday 17 September 2025 16:07:12 +0000 (0:00:00.742) 0:01:53.149 ***
2025-09-17 16:07:49.205821 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:07:49.205829 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:07:49.205837 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:07:49.205844 | orchestrator |
2025-09-17 16:07:49.205852 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-09-17 16:07:49.205860 | orchestrator | Wednesday 17 September 2025 16:07:13 +0000 (0:00:00.702) 0:01:53.852 ***
2025-09-17 16:07:49.205868 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:07:49.205875 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:07:49.205883 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:07:49.205891 | orchestrator |
2025-09-17 16:07:49.205898 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-17 16:07:49.205906 | orchestrator | Wednesday 17 September 2025 16:07:14 +0000 (0:00:00.382) 0:01:54.234 ***
2025-09-17 16:07:49.205914 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205922 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205930 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205938 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205950 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205963 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205976 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205985 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.205993 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.206000 | orchestrator |
2025-09-17 16:07:49.206008 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-17 16:07:49.206038 | orchestrator | Wednesday 17 September 2025 16:07:15 +0000 (0:00:01.432) 0:01:55.667 ***
2025-09-17 16:07:49.206048 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.206056 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.206064 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.206072 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.206080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.206099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.206114 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.206122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.206130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:07:49.206138 | orchestrator |
2025-09-17 16:07:49.206160 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-17 16:07:49.206168 | orchestrator | Wednesday 17 September 2025 16:07:19 +0000 (0:00:04.191) 0:01:59.859 *** 2025-09-17 16:07:49.206176 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.206184 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.206192 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.206201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.206209 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.206221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.206233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.206247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.206255 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:07:49.206264 | orchestrator | 2025-09-17 16:07:49.206272 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-17 16:07:49.206280 | orchestrator | Wednesday 17 September 2025 16:07:22 +0000 (0:00:02.859) 0:02:02.718 *** 2025-09-17 16:07:49.206288 | orchestrator | 2025-09-17 16:07:49.206296 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-17 16:07:49.206304 | orchestrator | Wednesday 17 September 2025 16:07:22 +0000 (0:00:00.059) 0:02:02.777 *** 2025-09-17 16:07:49.206311 | orchestrator | 2025-09-17 16:07:49.206319 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-17 16:07:49.206327 | orchestrator | Wednesday 17 September 2025 16:07:22 +0000 (0:00:00.187) 0:02:02.965 *** 2025-09-17 16:07:49.206335 | orchestrator | 2025-09-17 16:07:49.206342 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-17 16:07:49.206350 | orchestrator | Wednesday 17 September 2025 16:07:22 +0000 (0:00:00.058) 0:02:03.024 *** 2025-09-17 16:07:49.206358 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:07:49.206366 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:07:49.206374 | orchestrator | 2025-09-17 16:07:49.206381 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-17 16:07:49.206389 | orchestrator | Wednesday 17 September 2025 16:07:28 +0000 (0:00:06.140) 0:02:09.165 *** 2025-09-17 16:07:49.206397 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:07:49.206405 | orchestrator | changed: [testbed-node-2] 2025-09-17 
16:07:49.206412 | orchestrator | 2025-09-17 16:07:49.206420 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-17 16:07:49.206428 | orchestrator | Wednesday 17 September 2025 16:07:35 +0000 (0:00:06.211) 0:02:15.376 *** 2025-09-17 16:07:49.206436 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:07:49.206444 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:07:49.206451 | orchestrator | 2025-09-17 16:07:49.206459 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-17 16:07:49.206471 | orchestrator | Wednesday 17 September 2025 16:07:41 +0000 (0:00:06.444) 0:02:21.821 *** 2025-09-17 16:07:49.206479 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:07:49.206487 | orchestrator | 2025-09-17 16:07:49.206495 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-17 16:07:49.206503 | orchestrator | Wednesday 17 September 2025 16:07:41 +0000 (0:00:00.142) 0:02:21.963 *** 2025-09-17 16:07:49.206510 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:07:49.206518 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:07:49.206526 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:07:49.206533 | orchestrator | 2025-09-17 16:07:49.206541 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-17 16:07:49.206549 | orchestrator | Wednesday 17 September 2025 16:07:42 +0000 (0:00:00.749) 0:02:22.713 *** 2025-09-17 16:07:49.206557 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:07:49.206564 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:07:49.206572 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:07:49.206580 | orchestrator | 2025-09-17 16:07:49.206588 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-17 16:07:49.206595 | orchestrator | Wednesday 17 September 2025 
16:07:43 +0000 (0:00:00.647) 0:02:23.360 *** 2025-09-17 16:07:49.206603 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:07:49.206611 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:07:49.206619 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:07:49.206626 | orchestrator | 2025-09-17 16:07:49.206634 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-17 16:07:49.206642 | orchestrator | Wednesday 17 September 2025 16:07:43 +0000 (0:00:00.775) 0:02:24.136 *** 2025-09-17 16:07:49.206649 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:07:49.206657 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:07:49.206665 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:07:49.206673 | orchestrator | 2025-09-17 16:07:49.206680 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-17 16:07:49.206688 | orchestrator | Wednesday 17 September 2025 16:07:44 +0000 (0:00:00.629) 0:02:24.766 *** 2025-09-17 16:07:49.206696 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:07:49.206704 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:07:49.206711 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:07:49.206719 | orchestrator | 2025-09-17 16:07:49.206727 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-17 16:07:49.206735 | orchestrator | Wednesday 17 September 2025 16:07:45 +0000 (0:00:00.739) 0:02:25.505 *** 2025-09-17 16:07:49.206743 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:07:49.206751 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:07:49.206759 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:07:49.206766 | orchestrator | 2025-09-17 16:07:49.206774 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:07:49.206782 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 
ignored=0 2025-09-17 16:07:49.206791 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-17 16:07:49.206804 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-17 16:07:49.206812 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:07:49.206820 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:07:49.206828 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:07:49.206840 | orchestrator | 2025-09-17 16:07:49.206848 | orchestrator | 2025-09-17 16:07:49.206856 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:07:49.206864 | orchestrator | Wednesday 17 September 2025 16:07:46 +0000 (0:00:01.154) 0:02:26.659 *** 2025-09-17 16:07:49.206872 | orchestrator | =============================================================================== 2025-09-17 16:07:49.206879 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 32.22s 2025-09-17 16:07:49.206887 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.68s 2025-09-17 16:07:49.206895 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.96s 2025-09-17 16:07:49.206902 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.88s 2025-09-17 16:07:49.206910 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.68s 2025-09-17 16:07:49.206918 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.19s 2025-09-17 16:07:49.206926 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.03s 
2025-09-17 16:07:49.206933 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.86s 2025-09-17 16:07:49.206941 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.72s 2025-09-17 16:07:49.206949 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.44s 2025-09-17 16:07:49.206956 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.78s 2025-09-17 16:07:49.206964 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.59s 2025-09-17 16:07:49.206972 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.56s 2025-09-17 16:07:49.206980 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s 2025-09-17 16:07:49.206988 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.39s 2025-09-17 16:07:49.207028 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.23s 2025-09-17 16:07:49.207038 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.15s 2025-09-17 16:07:49.207046 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.06s 2025-09-17 16:07:49.207054 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.03s 2025-09-17 16:07:49.207063 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.02s 2025-09-17 16:07:49.207071 | orchestrator | 2025-09-17 16:07:49 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:07:49.207079 | orchestrator | 2025-09-17 16:07:49 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:07:52.236836 | orchestrator | 2025-09-17 16:07:52 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state STARTED 
2025-09-17 16:10:15.287648 | orchestrator | 2025-09-17
16:10:15 | INFO  | Task ee6b2eba-f246-4a29-b223-a072fae36a18 is in state SUCCESS 2025-09-17 16:10:15.290355 | orchestrator | 2025-09-17 16:10:15.290398 | orchestrator | 2025-09-17 16:10:15.290411 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:10:15.290424 | orchestrator | 2025-09-17 16:10:15.290465 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:10:15.290538 | orchestrator | Wednesday 17 September 2025 16:04:12 +0000 (0:00:00.452) 0:00:00.452 *** 2025-09-17 16:10:15.290552 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:10:15.290564 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:10:15.290575 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:10:15.290586 | orchestrator | 2025-09-17 16:10:15.290597 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:10:15.290656 | orchestrator | Wednesday 17 September 2025 16:04:13 +0000 (0:00:00.875) 0:00:01.327 *** 2025-09-17 16:10:15.290669 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-17 16:10:15.290755 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-17 16:10:15.290769 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-17 16:10:15.290780 | orchestrator | 2025-09-17 16:10:15.290791 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-17 16:10:15.290802 | orchestrator | 2025-09-17 16:10:15.290813 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-17 16:10:15.290824 | orchestrator | Wednesday 17 September 2025 16:04:14 +0000 (0:00:00.581) 0:00:01.909 *** 2025-09-17 16:10:15.290834 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.290846 | 
orchestrator | 2025-09-17 16:10:15.290883 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-17 16:10:15.290894 | orchestrator | Wednesday 17 September 2025 16:04:14 +0000 (0:00:00.852) 0:00:02.762 *** 2025-09-17 16:10:15.290905 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:10:15.290915 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:10:15.290926 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:10:15.290936 | orchestrator | 2025-09-17 16:10:15.290947 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-17 16:10:15.290971 | orchestrator | Wednesday 17 September 2025 16:04:15 +0000 (0:00:01.000) 0:00:03.762 *** 2025-09-17 16:10:15.290982 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.290993 | orchestrator | 2025-09-17 16:10:15.291004 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-17 16:10:15.291014 | orchestrator | Wednesday 17 September 2025 16:04:17 +0000 (0:00:01.057) 0:00:04.820 *** 2025-09-17 16:10:15.291025 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:10:15.291035 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:10:15.291085 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:10:15.291098 | orchestrator | 2025-09-17 16:10:15.291109 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-17 16:10:15.291119 | orchestrator | Wednesday 17 September 2025 16:04:17 +0000 (0:00:00.709) 0:00:05.530 *** 2025-09-17 16:10:15.291130 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-17 16:10:15.291141 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-17 16:10:15.291151 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 
'value': 1}) 2025-09-17 16:10:15.291162 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-17 16:10:15.291173 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-17 16:10:15.291207 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-17 16:10:15.291219 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-17 16:10:15.291230 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-17 16:10:15.291240 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-17 16:10:15.291251 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-17 16:10:15.291261 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-17 16:10:15.291271 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-17 16:10:15.291383 | orchestrator | 2025-09-17 16:10:15.291395 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-17 16:10:15.291406 | orchestrator | Wednesday 17 September 2025 16:04:21 +0000 (0:00:03.634) 0:00:09.164 *** 2025-09-17 16:10:15.291417 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-17 16:10:15.291454 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-17 16:10:15.291466 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-17 16:10:15.291476 | orchestrator | 2025-09-17 16:10:15.291487 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-17 16:10:15.291498 | orchestrator | Wednesday 17 September 2025 16:04:22 +0000 
(0:00:00.829) 0:00:09.993 *** 2025-09-17 16:10:15.291509 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-17 16:10:15.291519 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-17 16:10:15.291530 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-17 16:10:15.291540 | orchestrator | 2025-09-17 16:10:15.291551 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-17 16:10:15.291571 | orchestrator | Wednesday 17 September 2025 16:04:23 +0000 (0:00:01.236) 0:00:11.230 *** 2025-09-17 16:10:15.291582 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-17 16:10:15.291594 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.291618 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-17 16:10:15.291668 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.291680 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-17 16:10:15.291691 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.291701 | orchestrator | 2025-09-17 16:10:15.291712 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-17 16:10:15.291723 | orchestrator | Wednesday 17 September 2025 16:04:24 +0000 (0:00:01.096) 0:00:12.326 *** 2025-09-17 16:10:15.291737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 
'timeout': '30'}}}) 2025-09-17 16:10:15.291761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.291774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.291785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.291824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.291853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.291915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2025-09-17 16:10:15.291929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 16:10:15.291946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 16:10:15.291958 | orchestrator | 2025-09-17 16:10:15.291969 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-17 16:10:15.291980 | orchestrator | Wednesday 17 September 2025 16:04:26 +0000 (0:00:02.136) 0:00:14.463 *** 2025-09-17 16:10:15.291990 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.292001 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.292012 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.292022 | orchestrator | 2025-09-17 16:10:15.292033 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-17 16:10:15.292044 | orchestrator | Wednesday 17 September 2025 16:04:27 +0000 (0:00:00.845) 0:00:15.308 *** 2025-09-17 16:10:15.292054 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-17 
16:10:15.292065 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-17 16:10:15.292076 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-17 16:10:15.292086 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-17 16:10:15.292097 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-17 16:10:15.292108 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-17 16:10:15.292118 | orchestrator | 2025-09-17 16:10:15.292129 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-17 16:10:15.292140 | orchestrator | Wednesday 17 September 2025 16:04:30 +0000 (0:00:03.039) 0:00:18.348 *** 2025-09-17 16:10:15.292150 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.292161 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.292245 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.292267 | orchestrator | 2025-09-17 16:10:15.292278 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-17 16:10:15.292289 | orchestrator | Wednesday 17 September 2025 16:04:31 +0000 (0:00:01.412) 0:00:19.760 *** 2025-09-17 16:10:15.292300 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:10:15.292310 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:10:15.292321 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:10:15.292331 | orchestrator | 2025-09-17 16:10:15.292342 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-17 16:10:15.292353 | orchestrator | Wednesday 17 September 2025 16:04:33 +0000 (0:00:01.975) 0:00:21.736 *** 2025-09-17 16:10:15.292364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-17 16:10:15.292386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 16:10:15.292398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 16:10:15.292416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a92df81b58336f8db65b44c3bd2c539f5e252c00', '__omit_place_holder__a92df81b58336f8db65b44c3bd2c539f5e252c00'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-17 16:10:15.292429 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.292440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-17 16:10:15.292458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 16:10:15.292470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 16:10:15.292540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a92df81b58336f8db65b44c3bd2c539f5e252c00', '__omit_place_holder__a92df81b58336f8db65b44c3bd2c539f5e252c00'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-17 16:10:15.292553 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.292590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-17 16:10:15.292602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 16:10:15.292618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 16:10:15.292630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a92df81b58336f8db65b44c3bd2c539f5e252c00', '__omit_place_holder__a92df81b58336f8db65b44c3bd2c539f5e252c00'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-17 16:10:15.292652 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.292663 | 
orchestrator | 2025-09-17 16:10:15.292674 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-17 16:10:15.292684 | orchestrator | Wednesday 17 September 2025 16:04:34 +0000 (0:00:00.588) 0:00:22.324 *** 2025-09-17 16:10:15.292696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.292714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.292725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.292737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.292753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 16:10:15.292771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.292782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a92df81b58336f8db65b44c3bd2c539f5e252c00', '__omit_place_holder__a92df81b58336f8db65b44c3bd2c539f5e252c00'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-17 16:10:15.292794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 16:10:15.292813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__a92df81b58336f8db65b44c3bd2c539f5e252c00', '__omit_place_holder__a92df81b58336f8db65b44c3bd2c539f5e252c00'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-17 16:10:15.292824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.292840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 16:10:15.292907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__a92df81b58336f8db65b44c3bd2c539f5e252c00', '__omit_place_holder__a92df81b58336f8db65b44c3bd2c539f5e252c00'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-17 16:10:15.292932 | orchestrator | 2025-09-17 16:10:15.292943 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-17 16:10:15.292954 | orchestrator | Wednesday 17 September 2025 16:04:37 +0000 (0:00:03.368) 0:00:25.692 *** 2025-09-17 16:10:15.292965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.292977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.292997 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.293009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.293026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2025-09-17 16:10:15.293045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.293057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 16:10:15.293068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 16:10:15.293079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 16:10:15.293090 | orchestrator | 2025-09-17 16:10:15.293129 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-17 16:10:15.293142 | orchestrator | Wednesday 17 September 2025 16:04:42 +0000 (0:00:04.230) 0:00:29.923 *** 2025-09-17 16:10:15.293152 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-17 16:10:15.296721 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-17 16:10:15.296819 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-17 16:10:15.296832 | orchestrator | 2025-09-17 16:10:15.296842 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-17 16:10:15.296852 | orchestrator | Wednesday 17 September 2025 16:04:45 +0000 (0:00:03.531) 0:00:33.455 *** 2025-09-17 16:10:15.296862 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-17 16:10:15.296872 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-17 16:10:15.296881 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-17 16:10:15.296891 | orchestrator | 2025-09-17 16:10:15.296901 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-17 16:10:15.296910 
| orchestrator | Wednesday 17 September 2025 16:04:51 +0000 (0:00:06.256) 0:00:39.711 *** 2025-09-17 16:10:15.296933 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.296943 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.296953 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.296962 | orchestrator | 2025-09-17 16:10:15.296972 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-17 16:10:15.296981 | orchestrator | Wednesday 17 September 2025 16:04:52 +0000 (0:00:00.970) 0:00:40.682 *** 2025-09-17 16:10:15.296991 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-17 16:10:15.297002 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-17 16:10:15.297019 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-17 16:10:15.297114 | orchestrator | 2025-09-17 16:10:15.297124 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-17 16:10:15.297134 | orchestrator | Wednesday 17 September 2025 16:04:56 +0000 (0:00:03.863) 0:00:44.545 *** 2025-09-17 16:10:15.297143 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-17 16:10:15.297153 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-17 16:10:15.297162 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-17 16:10:15.297172 | orchestrator | 2025-09-17 16:10:15.297181 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-17 
16:10:15.297257 | orchestrator | Wednesday 17 September 2025 16:04:59 +0000 (0:00:02.426) 0:00:46.972 *** 2025-09-17 16:10:15.297267 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-17 16:10:15.297277 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-17 16:10:15.297287 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-17 16:10:15.297298 | orchestrator | 2025-09-17 16:10:15.297308 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-17 16:10:15.297319 | orchestrator | Wednesday 17 September 2025 16:05:00 +0000 (0:00:01.443) 0:00:48.415 *** 2025-09-17 16:10:15.297329 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-17 16:10:15.297339 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-17 16:10:15.297350 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-17 16:10:15.297360 | orchestrator | 2025-09-17 16:10:15.297371 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-17 16:10:15.297383 | orchestrator | Wednesday 17 September 2025 16:05:01 +0000 (0:00:01.380) 0:00:49.796 *** 2025-09-17 16:10:15.297393 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.297403 | orchestrator | 2025-09-17 16:10:15.297415 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-17 16:10:15.297426 | orchestrator | Wednesday 17 September 2025 16:05:02 +0000 (0:00:00.607) 0:00:50.403 *** 2025-09-17 16:10:15.297439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.297491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.297513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.297530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.297542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.297609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.297620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 16:10:15.297630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 16:10:15.297682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 16:10:15.297729 | orchestrator | 2025-09-17 16:10:15.297738 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-17 16:10:15.297746 | orchestrator | Wednesday 17 September 2025 16:05:05 +0000 (0:00:03.350) 0:00:53.753 *** 2025-09-17 16:10:15.297754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-17 16:10:15.297766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 16:10:15.297774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 16:10:15.297782 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.297790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-17 16:10:15.297798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 16:10:15.297818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 16:10:15.297827 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.297835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-17 16:10:15.297846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 16:10:15.297854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 16:10:15.297862 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.297870 | orchestrator | 2025-09-17 16:10:15.297878 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS 
key] *** 2025-09-17 16:10:15.297886 | orchestrator | Wednesday 17 September 2025 16:05:06 +0000 (0:00:00.537) 0:00:54.291 *** 2025-09-17 16:10:15.297894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-17 16:10:15.297902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 16:10:15.297919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 16:10:15.297928 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.297936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-17 16:10:15.297944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 16:10:15.297952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 16:10:15.297961 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.297969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-17 16:10:15.297977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 16:10:15.297989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-17 16:10:15.297997 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.298005 | orchestrator | 2025-09-17 16:10:15.298053 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-17 16:10:15.298063 | orchestrator | Wednesday 17 September 2025 16:05:07 +0000 (0:00:01.089) 0:00:55.380 *** 2025-09-17 16:10:15.298097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-17 16:10:15.298107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-17 16:10:15.298119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298127 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.298161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298208 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.298233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298257 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.298265 | orchestrator |
2025-09-17 16:10:15.298273 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-09-17 16:10:15.298369 | orchestrator | Wednesday 17 September 2025 16:05:08 +0000 (0:00:00.813) 0:00:56.194 ***
2025-09-17 16:10:15.298378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298408 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.298416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298447 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.298458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298488 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.298495 | orchestrator |
2025-09-17 16:10:15.298503 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-09-17 16:10:15.298511 | orchestrator | Wednesday 17 September 2025 16:05:09 +0000 (0:00:01.391) 0:00:57.585 ***
2025-09-17 16:10:15.298518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298549 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.298561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298590 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.298598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298626 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.298634 | orchestrator |
2025-09-17 16:10:15.298642 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-09-17 16:10:15.298649 | orchestrator | Wednesday 17 September 2025 16:05:10 +0000 (0:00:00.503) 0:00:58.462 ***
2025-09-17 16:10:15.298657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298716 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.298724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298755 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.298763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298796 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.298803 | orchestrator |
2025-09-17 16:10:15.298833 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-09-17 16:10:15.298842 | orchestrator | Wednesday 17 September 2025 16:05:11 +0000 (0:00:00.503) 0:00:58.965 ***
2025-09-17 16:10:15.298849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298880 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.298888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298920 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.298928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.298936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.298944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.298952 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.298960 | orchestrator |
2025-09-17 16:10:15.298968 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-09-17 16:10:15.298980 | orchestrator | Wednesday 17 September 2025 16:05:11 +0000 (0:00:00.522) 0:00:59.488 ***
2025-09-17 16:10:15.298988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.299053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.299067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.299075 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.299083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.299091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.299099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.299107 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.299120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-17 16:10:15.299133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-17 16:10:15.299145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-17 16:10:15.299153 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.299161 | orchestrator |
2025-09-17 16:10:15.299169 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-09-17 16:10:15.299176 | orchestrator | Wednesday 17 September 2025 16:05:12 +0000 (0:00:00.839) 0:01:00.328 ***
2025-09-17 16:10:15.299201 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-17 16:10:15.299210 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-17 16:10:15.299218 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-17 16:10:15.299225 | orchestrator |
2025-09-17 16:10:15.299233 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-09-17 16:10:15.299241 | orchestrator | Wednesday 17 September 2025 16:05:13 +0000 (0:00:01.317) 0:01:01.645 ***
2025-09-17 16:10:15.299248 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-17 16:10:15.299288 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-17 16:10:15.299296 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-17 16:10:15.299304 | orchestrator |
2025-09-17 16:10:15.299311 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-09-17 16:10:15.299319 | orchestrator | Wednesday 17 September 2025 16:05:15 +0000 (0:00:01.288) 0:01:02.934 ***
2025-09-17 16:10:15.299326 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-17 16:10:15.299334 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-17 16:10:15.299342 | orchestrator | skipping: [testbed-node-2] =>
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-17 16:10:15.299349 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-17 16:10:15.299357 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.299365 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-17 16:10:15.299372 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.299380 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-17 16:10:15.299387 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.299395 | orchestrator | 2025-09-17 16:10:15.299403 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-17 16:10:15.299415 | orchestrator | Wednesday 17 September 2025 16:05:15 +0000 (0:00:00.841) 0:01:03.776 *** 2025-09-17 16:10:15.299428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.299436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.299449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-17 16:10:15.299457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.299465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.299473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-17 16:10:15.299486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 16:10:15.299499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 16:10:15.299508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-17 16:10:15.299516 | orchestrator | 2025-09-17 16:10:15.299524 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-17 16:10:15.299531 | orchestrator | Wednesday 17 September 2025 16:05:18 +0000 (0:00:02.618) 0:01:06.394 *** 2025-09-17 16:10:15.299539 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.299547 | orchestrator | 2025-09-17 16:10:15.299580 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-17 16:10:15.299588 | orchestrator | Wednesday 17 September 2025 16:05:19 +0000 (0:00:00.531) 0:01:06.926 *** 2025-09-17 16:10:15.299601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-17 16:10:15.299611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-17 16:10:15.299652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.299668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 
'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.299677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.299685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.299700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.299736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.299745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-17 16:10:15.299759 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.299772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.299780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.299788 | orchestrator | 2025-09-17 16:10:15.299796 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2025-09-17 16:10:15.299804 | orchestrator | Wednesday 17 September 2025 16:05:23 +0000 (0:00:04.318) 0:01:11.245 *** 2025-09-17 16:10:15.299816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-17 16:10:15.299824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.299837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.299845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.299853 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.299866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-17 16:10:15.299875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.299887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.299895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.299925 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.299934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-17 16:10:15.299942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.299955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2025-09-17 16:10:15.299963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.299971 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.299979 | orchestrator | 2025-09-17 16:10:15.299987 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-17 16:10:15.299995 | orchestrator | Wednesday 17 September 2025 16:05:24 +0000 (0:00:00.596) 0:01:11.842 *** 2025-09-17 16:10:15.300003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-17 16:10:15.300016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-17 16:10:15.300024 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.300032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-17 16:10:15.300040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-17 16:10:15.300053 | 
orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.300061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-17 16:10:15.300068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-17 16:10:15.300076 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.300084 | orchestrator | 2025-09-17 16:10:15.300091 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-17 16:10:15.300099 | orchestrator | Wednesday 17 September 2025 16:05:25 +0000 (0:00:01.002) 0:01:12.844 *** 2025-09-17 16:10:15.300107 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.300114 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.300133 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.300140 | orchestrator | 2025-09-17 16:10:15.300148 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-17 16:10:15.300156 | orchestrator | Wednesday 17 September 2025 16:05:26 +0000 (0:00:01.388) 0:01:14.232 *** 2025-09-17 16:10:15.300164 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.300171 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.300179 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.300232 | orchestrator | 2025-09-17 16:10:15.300240 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-17 16:10:15.300248 | orchestrator | Wednesday 17 September 2025 16:05:28 +0000 (0:00:01.814) 0:01:16.047 *** 2025-09-17 16:10:15.300256 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.300264 | 
orchestrator | 2025-09-17 16:10:15.300271 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-17 16:10:15.300279 | orchestrator | Wednesday 17 September 2025 16:05:28 +0000 (0:00:00.570) 0:01:16.617 *** 2025-09-17 16:10:15.300295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.300305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.300317 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.300332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.300340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.300348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.300362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.300371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.300388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.300396 | orchestrator | 2025-09-17 16:10:15.300404 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-17 16:10:15.300412 | orchestrator | Wednesday 17 September 2025 16:05:32 +0000 (0:00:03.306) 0:01:19.923 *** 2025-09-17 16:10:15.300420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.300428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.300441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.300449 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.300457 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.300474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.300483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.300491 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.300540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.300554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.300563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.300576 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.300583 | orchestrator | 2025-09-17 16:10:15.300591 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-17 16:10:15.300599 | orchestrator | Wednesday 17 September 2025 16:05:33 +0000 (0:00:01.523) 0:01:21.447 *** 2025-09-17 16:10:15.300607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-17 16:10:15.300615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-17 16:10:15.300623 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.300631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-17 16:10:15.300639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-17 16:10:15.300646 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.300654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-17 16:10:15.300662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-17 16:10:15.300669 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.300677 | orchestrator | 2025-09-17 16:10:15.300685 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-17 16:10:15.300692 | orchestrator | Wednesday 17 September 2025 16:05:34 +0000 (0:00:00.745) 0:01:22.192 *** 2025-09-17 16:10:15.300700 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.300708 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.300715 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.300723 | orchestrator | 2025-09-17 16:10:15.300730 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-17 16:10:15.300738 | orchestrator | Wednesday 17 September 2025 16:05:35 +0000 (0:00:01.268) 0:01:23.461 *** 2025-09-17 16:10:15.300745 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.300753 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.300759 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.300766 | orchestrator | 2025-09-17 16:10:15.300772 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-17 16:10:15.300779 | orchestrator | 
Wednesday 17 September 2025 16:05:37 +0000 (0:00:01.932) 0:01:25.394 *** 2025-09-17 16:10:15.300785 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.300792 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.300798 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.300804 | orchestrator | 2025-09-17 16:10:15.300811 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-17 16:10:15.300817 | orchestrator | Wednesday 17 September 2025 16:05:37 +0000 (0:00:00.386) 0:01:25.780 *** 2025-09-17 16:10:15.300824 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.300831 | orchestrator | 2025-09-17 16:10:15.300837 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-17 16:10:15.300843 | orchestrator | Wednesday 17 September 2025 16:05:38 +0000 (0:00:00.577) 0:01:26.357 *** 2025-09-17 16:10:15.300857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-17 16:10:15.300872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-17 16:10:15.300896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-17 16:10:15.300903 | orchestrator | 2025-09-17 16:10:15.300910 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-17 16:10:15.300916 | orchestrator | Wednesday 17 September 2025 16:05:40 +0000 (0:00:02.348) 0:01:28.705 *** 2025-09-17 16:10:15.300923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-17 16:10:15.300929 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.300936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-17 16:10:15.300947 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.300958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-17 16:10:15.300965 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.300971 | orchestrator | 2025-09-17 16:10:15.300978 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-17 16:10:15.300984 | orchestrator | Wednesday 17 September 2025 16:05:42 +0000 (0:00:01.461) 0:01:30.167 *** 2025-09-17 16:10:15.300993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-17 16:10:15.301004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-17 16:10:15.301011 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.301018 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-17 16:10:15.301025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-17 16:10:15.301031 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.301038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-17 16:10:15.301049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-17 16:10:15.301056 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.301062 | orchestrator | 2025-09-17 16:10:15.301069 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-17 16:10:15.301075 | orchestrator | Wednesday 17 September 2025 16:05:43 +0000 (0:00:01.403) 0:01:31.571 *** 2025-09-17 16:10:15.301082 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.301088 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.301094 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.301101 | orchestrator | 2025-09-17 16:10:15.301107 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-17 16:10:15.301114 | orchestrator | Wednesday 17 September 2025 16:05:44 +0000 (0:00:00.386) 0:01:31.957 *** 2025-09-17 16:10:15.301120 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.301126 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.301133 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.301139 | orchestrator | 2025-09-17 16:10:15.301146 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-17 16:10:15.301156 | orchestrator | Wednesday 17 September 2025 16:05:45 +0000 (0:00:01.295) 0:01:33.252 *** 2025-09-17 16:10:15.301163 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.301170 | orchestrator | 2025-09-17 16:10:15.301176 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-17 16:10:15.301195 | orchestrator | Wednesday 17 September 2025 16:05:46 +0000 (0:00:00.811) 0:01:34.064 *** 2025-09-17 16:10:15.301203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.301214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.301288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.301326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': 
[''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301376 | orchestrator | 2025-09-17 16:10:15.301386 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-17 16:10:15.301393 | orchestrator | Wednesday 17 September 2025 16:05:49 +0000 (0:00:03.013) 0:01:37.077 *** 2025-09-17 16:10:15.301400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.301412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301445 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.301455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.301462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301487 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.301498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.301505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301537 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.301544 | orchestrator | 2025-09-17 16:10:15.301550 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-17 16:10:15.301557 | orchestrator | Wednesday 17 September 2025 16:05:49 +0000 (0:00:00.632) 0:01:37.710 *** 2025-09-17 16:10:15.301564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-17 16:10:15.301571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-17 16:10:15.301577 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.301584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-17 16:10:15.301590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-17 16:10:15.301597 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.301607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-17 16:10:15.301614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-17 16:10:15.301621 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.301627 | orchestrator | 2025-09-17 16:10:15.301634 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-17 16:10:15.301640 | orchestrator | Wednesday 17 September 2025 16:05:51 +0000 (0:00:01.377) 0:01:39.088 *** 2025-09-17 16:10:15.301647 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.301653 | 
orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.301660 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.301666 | orchestrator | 2025-09-17 16:10:15.301673 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-17 16:10:15.301679 | orchestrator | Wednesday 17 September 2025 16:05:52 +0000 (0:00:01.393) 0:01:40.482 *** 2025-09-17 16:10:15.301686 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.301692 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.301704 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.301710 | orchestrator | 2025-09-17 16:10:15.301717 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-17 16:10:15.301723 | orchestrator | Wednesday 17 September 2025 16:05:54 +0000 (0:00:01.942) 0:01:42.425 *** 2025-09-17 16:10:15.301730 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.301736 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.301742 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.301749 | orchestrator | 2025-09-17 16:10:15.301755 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-17 16:10:15.301762 | orchestrator | Wednesday 17 September 2025 16:05:54 +0000 (0:00:00.338) 0:01:42.763 *** 2025-09-17 16:10:15.301768 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.301775 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.301781 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.301787 | orchestrator | 2025-09-17 16:10:15.301797 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-17 16:10:15.301804 | orchestrator | Wednesday 17 September 2025 16:05:55 +0000 (0:00:00.562) 0:01:43.326 *** 2025-09-17 16:10:15.301810 | orchestrator | included: designate for testbed-node-0, testbed-node-1, 
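Editor's note on the service dicts logged above: each kolla-ansible item carries an optional `haproxy` map whose entries (internal and `_external`) define the HAProxy listeners for that API. A minimal, illustrative sketch of that shape (hypothetical helper, not kolla-ansible code; values abbreviated from the cinder-api item above):

```python
# Illustrative only: a kolla-ansible-style service entry shaped like the
# cinder-api loop item in the log above (abbreviated; not kolla code).
services = {
    "cinder-api": {
        "container_name": "cinder_api",
        "enabled": True,
        "haproxy": {
            "cinder_api": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "8776", "listen_port": "8776", "tls_backend": "no",
            },
            "cinder_api_external": {
                "enabled": "yes", "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "8776", "listen_port": "8776", "tls_backend": "no",
            },
        },
    },
}

def listeners(svcs):
    """Collect (listener_name, listen_port, external) for enabled entries."""
    out = []
    for svc in svcs.values():
        for name, cfg in svc.get("haproxy", {}).items():
            if cfg.get("enabled") == "yes":
                out.append((name, cfg["listen_port"], cfg["external"]))
    return out

print(listeners(services))
# [('cinder_api', '8776', False), ('cinder_api_external', '8776', True)]
```

Services without a `haproxy` key (scheduler, volume, backup in the log) yield no listeners, which is why only the `*-api` items show `changed` for the haproxy-config tasks.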
testbed-node-2 2025-09-17 16:10:15.301817 | orchestrator | 2025-09-17 16:10:15.301823 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-17 16:10:15.301830 | orchestrator | Wednesday 17 September 2025 16:05:56 +0000 (0:00:00.908) 0:01:44.235 *** 2025-09-17 16:10:15.301836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 16:10:15.301844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 16:10:15.301851 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 16:10:15.301906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 16:10:15.301917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-17 
16:10:15.301960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 16:10:15.301970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 16:10:15.301982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.301999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302045 | orchestrator | 2025-09-17 16:10:15.302052 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-17 16:10:15.302059 | orchestrator | Wednesday 17 September 2025 16:06:00 +0000 (0:00:04.510) 0:01:48.745 *** 2025-09-17 16:10:15.302070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:10:15.302081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 16:10:15.302092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-17 
16:10:15.302130 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.302142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:10:15.302149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 16:10:15.302159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302215 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.302222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:10:15.302233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 16:10:15.302240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.302286 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.302293 | orchestrator | 2025-09-17 16:10:15.302299 | orchestrator | TASK 
[haproxy-config : Configuring firewall for designate] ********************* 2025-09-17 16:10:15.302306 | orchestrator | Wednesday 17 September 2025 16:06:02 +0000 (0:00:01.319) 0:01:50.065 *** 2025-09-17 16:10:15.302313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-17 16:10:15.302323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-17 16:10:15.302330 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.302337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-17 16:10:15.302343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-17 16:10:15.302350 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.302356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-17 16:10:15.302363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-17 16:10:15.302369 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.302376 | orchestrator | 2025-09-17 16:10:15.302382 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-17 
16:10:15.302395 | orchestrator | Wednesday 17 September 2025 16:06:03 +0000 (0:00:01.000) 0:01:51.065 *** 2025-09-17 16:10:15.302402 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.302409 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.302415 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.302421 | orchestrator | 2025-09-17 16:10:15.302428 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-17 16:10:15.302434 | orchestrator | Wednesday 17 September 2025 16:06:04 +0000 (0:00:01.246) 0:01:52.312 *** 2025-09-17 16:10:15.302441 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.302447 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.302453 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.302460 | orchestrator | 2025-09-17 16:10:15.302466 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-17 16:10:15.302473 | orchestrator | Wednesday 17 September 2025 16:06:06 +0000 (0:00:01.817) 0:01:54.130 *** 2025-09-17 16:10:15.302479 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.302485 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.302492 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.302498 | orchestrator | 2025-09-17 16:10:15.302505 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-17 16:10:15.302511 | orchestrator | Wednesday 17 September 2025 16:06:06 +0000 (0:00:00.412) 0:01:54.542 *** 2025-09-17 16:10:15.302518 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.302524 | orchestrator | 2025-09-17 16:10:15.302530 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-17 16:10:15.302537 | orchestrator | Wednesday 17 September 2025 16:06:07 +0000 (0:00:00.762) 0:01:55.304 *** 
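The designate items above, and the glance items that follow, each carry a `haproxy` sub-dict (`enabled`, `mode`, `external`, `port`, `listen_port`, optional `external_fqdn` and `custom_member_list`) that the haproxy-config role renders into HAProxy frontend/backend stanzas. As a rough illustration of that mapping, a minimal sketch; the helper `render_haproxy_stanza` is hypothetical and not the actual kolla-ansible template:

```python
def render_haproxy_stanza(name, svc, members):
    """Illustrative only: turn a kolla-style 'haproxy' service entry into an
    HAProxy frontend/backend pair. Not the real kolla-ansible template."""
    # kolla dicts mix 'yes' strings and True booleans for 'enabled'
    if svc.get("enabled") not in ("yes", True):
        return ""  # disabled services render nothing
    lines = [
        f"frontend {name}_front",
        f"    mode {svc['mode']}",
        f"    bind *:{svc['listen_port']}",
        f"    default_backend {name}_back",
        f"backend {name}_back",
        f"    mode {svc['mode']}",
    ]
    # when present, custom_member_list entries are taken verbatim
    for member in svc.get("custom_member_list", members):
        if member:  # the log shows trailing empty strings in these lists
            lines.append(f"    {member}")
    return "\n".join(lines)


# Shaped like the designate_api entry in the log above
svc = {"enabled": "yes", "mode": "http", "external": False,
       "port": "9001", "listen_port": "9001"}
members = ["server testbed-node-0 192.168.16.10:9001 check inter 2000 rise 2 fall 5"]
print(render_haproxy_stanza("designate_api", svc, members))
```

This also shows why the per-item `skipping:` lines are expected: entries with `'enabled': False` (such as designate-sink) or `'enabled': 'no'` (such as glance-tls-proxy) are filtered out before any configuration is written.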
2025-09-17 16:10:15.302550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:10:15.302562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-17 16:10:15.302580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:10:15.302591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-17 16:10:15.302608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:10:15.302619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-17 16:10:15.302631 | orchestrator | 2025-09-17 16:10:15.302638 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-17 16:10:15.302645 | orchestrator | Wednesday 17 September 2025 16:06:11 +0000 (0:00:04.041) 0:01:59.346 *** 2025-09-17 16:10:15.302656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 16:10:15.302667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 16:10:15.302679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-17 16:10:15.302687 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.302702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-17 16:10:15.302714 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.302722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 16:10:15.302734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-17 16:10:15.302742 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.302749 | orchestrator | 2025-09-17 16:10:15.302755 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-17 16:10:15.302762 | orchestrator | Wednesday 17 September 2025 16:06:14 +0000 (0:00:02.966) 0:02:02.312 *** 2025-09-17 16:10:15.302776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-17 16:10:15.302784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-17 16:10:15.302791 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.302798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-17 16:10:15.302805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-17 16:10:15.302811 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.302818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-17 16:10:15.302829 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-17 16:10:15 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:10:15.302836 | orchestrator | 2025-09-17 16:10:15 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:10:15.303440 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.303459 | orchestrator | 2025-09-17 16:10:15.303473 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-17 16:10:15.303487 | orchestrator | Wednesday 17 September 2025 16:06:17 +0000 (0:00:02.951) 0:02:05.264 *** 2025-09-17 16:10:15.303521 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.303534 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.303547 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.303558 | orchestrator | 2025-09-17 16:10:15.303571 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-17 16:10:15.303583 | orchestrator | Wednesday 17 September 2025 16:06:18 +0000 (0:00:01.418) 0:02:06.682 *** 2025-09-17 16:10:15.303594 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.303605 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.303616 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.303626 | orchestrator | 2025-09-17 16:10:15.303637 | orchestrator | TASK [include_role : gnocchi]
************************************************** 2025-09-17 16:10:15.303648 | orchestrator | Wednesday 17 September 2025 16:06:20 +0000 (0:00:02.043) 0:02:08.725 *** 2025-09-17 16:10:15.303659 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.303670 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.303680 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.303691 | orchestrator | 2025-09-17 16:10:15.303702 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-17 16:10:15.303713 | orchestrator | Wednesday 17 September 2025 16:06:21 +0000 (0:00:00.478) 0:02:09.204 *** 2025-09-17 16:10:15.303723 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.303734 | orchestrator | 2025-09-17 16:10:15.303745 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-17 16:10:15.303756 | orchestrator | Wednesday 17 September 2025 16:06:22 +0000 (0:00:00.823) 0:02:10.028 *** 2025-09-17 16:10:15.303769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 16:10:15.303785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 16:10:15.303798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 16:10:15.303809 | orchestrator | 2025-09-17 16:10:15.303820 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-17 16:10:15.303831 | orchestrator | Wednesday 17 September 2025 16:06:25 +0000 (0:00:03.030) 0:02:13.058 *** 2025-09-17 16:10:15.303893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-17 16:10:15.303909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-17 16:10:15.303920 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.303936 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.303948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-17 16:10:15.303959 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.303970 | orchestrator | 2025-09-17 
16:10:15.303981 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-17 16:10:15.303992 | orchestrator | Wednesday 17 September 2025 16:06:25 +0000 (0:00:00.618) 0:02:13.676 *** 2025-09-17 16:10:15.304003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-17 16:10:15.304015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-17 16:10:15.304027 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.304038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-17 16:10:15.304049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-17 16:10:15.304060 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.304071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-17 16:10:15.304081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-17 16:10:15.304099 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.304110 | orchestrator | 2025-09-17 16:10:15.304121 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL 
users config] ************ 2025-09-17 16:10:15.304131 | orchestrator | Wednesday 17 September 2025 16:06:26 +0000 (0:00:00.663) 0:02:14.340 *** 2025-09-17 16:10:15.304142 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.304153 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.304164 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.304174 | orchestrator | 2025-09-17 16:10:15.304211 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-17 16:10:15.304224 | orchestrator | Wednesday 17 September 2025 16:06:27 +0000 (0:00:01.278) 0:02:15.618 *** 2025-09-17 16:10:15.304235 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.304246 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.304257 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.304268 | orchestrator | 2025-09-17 16:10:15.304287 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-17 16:10:15.304298 | orchestrator | Wednesday 17 September 2025 16:06:29 +0000 (0:00:02.001) 0:02:17.620 *** 2025-09-17 16:10:15.304309 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.304320 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.304330 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.304341 | orchestrator | 2025-09-17 16:10:15.304351 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-17 16:10:15.304362 | orchestrator | Wednesday 17 September 2025 16:06:30 +0000 (0:00:00.509) 0:02:18.129 *** 2025-09-17 16:10:15.304373 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.304383 | orchestrator | 2025-09-17 16:10:15.304394 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-17 16:10:15.304405 | orchestrator | Wednesday 17 September 2025 16:06:31 
+0000 (0:00:00.889) 0:02:19.019 *** 2025-09-17 16:10:15.304426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 16:10:15.304457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 16:10:15.304478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 16:10:15.304497 | orchestrator | 2025-09-17 16:10:15.304507 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-17 16:10:15.304518 | orchestrator | Wednesday 17 September 2025 16:06:35 +0000 (0:00:04.094) 0:02:23.113 *** 2025-09-17 16:10:15.304543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 16:10:15.304556 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.304568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 16:10:15.304586 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.304611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 16:10:15.304623 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.304634 | orchestrator | 2025-09-17 16:10:15.304645 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 
2025-09-17 16:10:15.304655 | orchestrator | Wednesday 17 September 2025 16:06:36 +0000 (0:00:01.503) 0:02:24.617 ***
2025-09-17 16:10:15.304667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-17 16:10:15.304680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-17 16:10:15.304698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-17 16:10:15.304711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-17 16:10:15.304723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-17 16:10:15.304734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-17 16:10:15.304745 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.304756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-17 16:10:15.304774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-17 16:10:15.304786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-17 16:10:15.304797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-17 16:10:15.304807 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.304818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-17 16:10:15.304833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-17 16:10:15.304845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-17 16:10:15.304866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-17 16:10:15.304878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-17 16:10:15.304889 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.304900 | orchestrator |
2025-09-17 16:10:15.304911 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-09-17 16:10:15.304921 | orchestrator | Wednesday 17 September 2025 16:06:37 +0000 (0:00:01.163) 0:02:25.780 ***
2025-09-17 16:10:15.304932 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:10:15.304943 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:10:15.304953 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:10:15.304964 | orchestrator |
2025-09-17 16:10:15.304975 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-09-17 16:10:15.304986 | orchestrator | Wednesday 17 September 2025 16:06:39 +0000 (0:00:01.323) 0:02:27.104 ***
2025-09-17 16:10:15.304996 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:10:15.305007 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:10:15.305018 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:10:15.305029 | orchestrator |
2025-09-17 16:10:15.305040 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-09-17 16:10:15.305051 | orchestrator | Wednesday 17 September 2025 16:06:41 +0000 (0:00:02.148) 0:02:29.252 ***
2025-09-17 16:10:15.305062 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.305072 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.305083 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.305094 | orchestrator |
2025-09-17 16:10:15.305104 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-09-17 16:10:15.305115 | orchestrator | Wednesday 17 September 2025 16:06:41 +0000 (0:00:00.498) 0:02:29.751 ***
2025-09-17 16:10:15.305126 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.305136 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.305147 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.305157 | orchestrator |
2025-09-17 16:10:15.305168 | orchestrator | TASK [include_role : keystone] *************************************************
2025-09-17 16:10:15.305178 | orchestrator | Wednesday 17 September 2025 16:06:42 +0000 (0:00:00.291) 0:02:30.042 ***
2025-09-17 16:10:15.305257 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:10:15.305270 | orchestrator |
2025-09-17 16:10:15.305281 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-09-17 16:10:15.305292 | orchestrator | Wednesday 17 September 2025 16:06:43 +0000 (0:00:00.943) 0:02:30.986 ***
2025-09-17 16:10:15.305314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 16:10:15.305341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-17 16:10:15.305354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 16:10:15.305367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 16:10:15.305379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-17 16:10:15.305399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 16:10:15.305415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 16:10:15.305433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-17 16:10:15.305445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 16:10:15.305456 | orchestrator |
2025-09-17 16:10:15.305468 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-09-17 16:10:15.305479 | orchestrator | Wednesday 17 September 2025 16:06:47 +0000 (0:00:03.924) 0:02:34.910 ***
2025-09-17 16:10:15.305490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 16:10:15.305509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-17 16:10:15.305521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 16:10:15.305538 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.305555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 16:10:15.305567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-17 16:10:15.305579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 16:10:15.305590 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.305609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-17 16:10:15.305635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-17 16:10:15.305651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-17 16:10:15.305662 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.305673 | orchestrator |
2025-09-17 16:10:15.305684 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-09-17 16:10:15.305695 | orchestrator | Wednesday 17 September 2025 16:06:47 +0000 (0:00:00.701) 0:02:35.612 ***
2025-09-17 16:10:15.305707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-17 16:10:15.305719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-17 16:10:15.305731 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.305742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-17 16:10:15.305753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-17 16:10:15.305764 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.305774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-17 16:10:15.305784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-17 16:10:15.305794 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.305804 | orchestrator |
2025-09-17 16:10:15.305813 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-09-17 16:10:15.305823 | orchestrator | Wednesday 17 September 2025 16:06:48 +0000 (0:00:00.871) 0:02:36.483 ***
2025-09-17 16:10:15.305833 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:10:15.305842 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:10:15.305852 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:10:15.305862 | orchestrator |
2025-09-17 16:10:15.305876 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-09-17 16:10:15.305885 | orchestrator | Wednesday 17 September 2025 16:06:50 +0000 (0:00:01.585) 0:02:38.068 ***
2025-09-17 16:10:15.305895 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:10:15.305904 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:10:15.305914 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:10:15.305924 | orchestrator |
2025-09-17 16:10:15.305933 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-09-17 16:10:15.305949 | orchestrator | Wednesday 17 September 2025 16:06:52 +0000 (0:00:00.320) 0:02:40.280 ***
2025-09-17 16:10:15.305959 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.305968 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.305978 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.305987 | orchestrator |
2025-09-17 16:10:15.305997 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-09-17 16:10:15.306006 | orchestrator | Wednesday 17 September 2025 16:06:52 +0000 (0:00:00.320) 0:02:40.601 ***
2025-09-17 16:10:15.306058 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:10:15.306072 | orchestrator |
2025-09-17 16:10:15.306082 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-09-17 16:10:15.306091 | orchestrator | Wednesday 17 September 2025 16:06:53 +0000 (0:00:00.914) 0:02:41.515 ***
2025-09-17 16:10:15.306106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:10:15.306118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.306129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:10:15.306146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.306164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:10:15.306178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.306213 | orchestrator |
2025-09-17 16:10:15.306224 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-09-17 16:10:15.306234 | orchestrator | Wednesday 17 September 2025 16:06:57 +0000 (0:00:03.747) 0:02:45.263 ***
2025-09-17 16:10:15.306244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:10:15.306255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.306270 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.306319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:10:15.306330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True,
'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306340 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.306354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-17 16:10:15.306365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306374 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.306393 | orchestrator | 2025-09-17 16:10:15.306403 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-17 16:10:15.306413 | orchestrator | Wednesday 17 September 2025 16:06:58 +0000 (0:00:00.636) 0:02:45.900 *** 2025-09-17 16:10:15.306423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-17 16:10:15.306433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-17 16:10:15.306443 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.306452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-17 16:10:15.306462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-17 16:10:15.306471 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.306481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-17 16:10:15.306491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-17 16:10:15.306507 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.306517 | orchestrator | 2025-09-17 16:10:15.306526 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-17 16:10:15.306536 | orchestrator | Wednesday 17 September 2025 16:06:58 +0000 (0:00:00.831) 0:02:46.732 *** 2025-09-17 16:10:15.306545 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.306555 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.306564 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.306573 | orchestrator | 2025-09-17 16:10:15.306583 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-17 16:10:15.306593 | orchestrator | Wednesday 17 September 2025 16:07:00 +0000 (0:00:01.614) 0:02:48.346 *** 2025-09-17 16:10:15.306602 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.306611 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.306621 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.306630 | orchestrator | 2025-09-17 16:10:15.306639 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-17 16:10:15.306649 | orchestrator | Wednesday 17 September 2025 16:07:02 +0000 (0:00:02.014) 0:02:50.361 *** 2025-09-17 16:10:15.306658 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.306667 | orchestrator | 2025-09-17 16:10:15.306677 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-17 16:10:15.306686 | 
orchestrator | Wednesday 17 September 2025 16:07:03 +0000 (0:00:01.031) 0:02:51.393 *** 2025-09-17 16:10:15.306700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-17 16:10:15.306716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-17 16:10:15.306737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 
'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-17 16:10:15.306814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306851 | orchestrator | 2025-09-17 16:10:15.306861 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-17 16:10:15.306870 | orchestrator | Wednesday 17 September 2025 16:07:07 +0000 (0:00:03.575) 0:02:54.969 *** 2025-09-17 16:10:15.306884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-17 16:10:15.306900 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306931 | 
orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.306947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-17 16:10:15.306961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-17 16:10:15.306987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.306997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.307013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.307023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.307034 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.307048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.307064 | orchestrator | skipping: [testbed-node-2] 2025-09-17 
16:10:15.307074 | orchestrator | 2025-09-17 16:10:15.307084 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-17 16:10:15.307094 | orchestrator | Wednesday 17 September 2025 16:07:07 +0000 (0:00:00.570) 0:02:55.539 *** 2025-09-17 16:10:15.307104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-17 16:10:15.307113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-17 16:10:15.307123 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.307133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-17 16:10:15.307142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-17 16:10:15.307152 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.307162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-17 16:10:15.307171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-17 16:10:15.307181 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.307219 | orchestrator | 2025-09-17 16:10:15.307237 | orchestrator | TASK [proxysql-config : Copying 
over manila ProxySQL users config] ************* 2025-09-17 16:10:15.307253 | orchestrator | Wednesday 17 September 2025 16:07:08 +0000 (0:00:00.752) 0:02:56.291 *** 2025-09-17 16:10:15.307267 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.307277 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.307287 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.307297 | orchestrator | 2025-09-17 16:10:15.307306 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-17 16:10:15.307316 | orchestrator | Wednesday 17 September 2025 16:07:09 +0000 (0:00:01.224) 0:02:57.516 *** 2025-09-17 16:10:15.307325 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.307335 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.307344 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.307354 | orchestrator | 2025-09-17 16:10:15.307363 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-17 16:10:15.307373 | orchestrator | Wednesday 17 September 2025 16:07:11 +0000 (0:00:01.975) 0:02:59.492 *** 2025-09-17 16:10:15.307382 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.307392 | orchestrator | 2025-09-17 16:10:15.307401 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-17 16:10:15.307411 | orchestrator | Wednesday 17 September 2025 16:07:12 +0000 (0:00:01.158) 0:03:00.650 *** 2025-09-17 16:10:15.307420 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-17 16:10:15.307430 | orchestrator | 2025-09-17 16:10:15.307440 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-17 16:10:15.307456 | orchestrator | Wednesday 17 September 2025 16:07:16 +0000 (0:00:03.164) 0:03:03.814 *** 2025-09-17 16:10:15.307480 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:10:15.307493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-17 16:10:15.307503 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.307520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:10:15.307537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-17 16:10:15.307548 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.307562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:10:15.307573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-17 16:10:15.307584 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.307593 | orchestrator | 2025-09-17 16:10:15.307603 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-17 16:10:15.307613 | orchestrator 
| Wednesday 17 September 2025 16:07:18 +0000 (0:00:02.404) 0:03:06.219 *** 2025-09-17 16:10:15.307637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:10:15.307654 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-17 16:10:15.307664 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.307675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:10:15.307697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-17 16:10:15.307707 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.307721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:10:15.307732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-17 16:10:15.307743 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.307752 | orchestrator | 2025-09-17 16:10:15.307762 | orchestrator | TASK 
[haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-17 16:10:15.307772 | orchestrator | Wednesday 17 September 2025 16:07:20 +0000 (0:00:02.276) 0:03:08.495 *** 2025-09-17 16:10:15.307782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-17 16:10:15.307802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-17 16:10:15.307813 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.307823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-17 16:10:15.307837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-17 16:10:15.307846 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.307857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-17 16:10:15.307867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-17 16:10:15.307877 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.307886 | orchestrator | 2025-09-17 16:10:15.307896 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-17 16:10:15.307905 | orchestrator | Wednesday 17 September 2025 16:07:22 +0000 (0:00:02.095) 0:03:10.590 *** 2025-09-17 16:10:15.307915 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.307924 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.307934 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.307949 | orchestrator | 2025-09-17 16:10:15.307959 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-17 16:10:15.307969 | orchestrator | Wednesday 17 September 2025 16:07:24 +0000 (0:00:02.056) 0:03:12.647 *** 2025-09-17 16:10:15.307978 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.307988 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.307997 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.308007 | orchestrator | 2025-09-17 16:10:15.308016 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-17 16:10:15.308026 | orchestrator | Wednesday 17 September 2025 16:07:26 +0000 (0:00:01.465) 0:03:14.113 *** 2025-09-17 16:10:15.308036 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.308045 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.308055 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.308064 | orchestrator | 2025-09-17 16:10:15.308074 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-17 16:10:15.308084 | orchestrator | Wednesday 17 September 2025 16:07:26 +0000 (0:00:00.499) 0:03:14.612 *** 2025-09-17 16:10:15.308093 | orchestrator | 
included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.308103 | orchestrator | 2025-09-17 16:10:15.308113 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-17 16:10:15.308122 | orchestrator | Wednesday 17 September 2025 16:07:27 +0000 (0:00:01.053) 0:03:15.666 *** 2025-09-17 16:10:15.308139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-17 16:10:15.308150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-17 16:10:15.308165 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-17 16:10:15.308176 | orchestrator | 2025-09-17 16:10:15.308238 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-17 16:10:15.308259 | orchestrator | Wednesday 17 September 2025 16:07:29 +0000 (0:00:01.504) 0:03:17.171 *** 2025-09-17 16:10:15.308270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-17 16:10:15.308280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 
'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-17 16:10:15.308291 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.308300 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.308318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-17 16:10:15.308328 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.308338 | orchestrator | 2025-09-17 16:10:15.308346 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-17 16:10:15.308354 | orchestrator | Wednesday 17 September 2025 16:07:30 +0000 (0:00:00.676) 0:03:17.848 *** 2025-09-17 16:10:15.308362 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-17 16:10:15.308370 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.308379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-17 16:10:15.308387 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.308394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-17 16:10:15.308403 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.308415 | orchestrator | 2025-09-17 16:10:15.308423 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-17 16:10:15.308431 | orchestrator | Wednesday 17 September 2025 16:07:30 +0000 (0:00:00.601) 0:03:18.449 *** 2025-09-17 16:10:15.308439 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.308447 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.308455 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.308463 | orchestrator | 2025-09-17 16:10:15.308485 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-17 16:10:15.308493 | orchestrator | Wednesday 17 September 2025 16:07:31 +0000 (0:00:00.486) 0:03:18.936 *** 2025-09-17 16:10:15.308501 | orchestrator | skipping: [testbed-node-0] 2025-09-17 
16:10:15.308509 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.308517 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.308524 | orchestrator | 2025-09-17 16:10:15.308532 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-17 16:10:15.308540 | orchestrator | Wednesday 17 September 2025 16:07:32 +0000 (0:00:01.279) 0:03:20.216 *** 2025-09-17 16:10:15.308548 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.308555 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.308564 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.308571 | orchestrator | 2025-09-17 16:10:15.308579 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-17 16:10:15.308587 | orchestrator | Wednesday 17 September 2025 16:07:32 +0000 (0:00:00.526) 0:03:20.743 *** 2025-09-17 16:10:15.308595 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.308603 | orchestrator | 2025-09-17 16:10:15.308611 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-17 16:10:15.308619 | orchestrator | Wednesday 17 September 2025 16:07:34 +0000 (0:00:01.202) 0:03:21.945 *** 2025-09-17 16:10:15.308627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 16:10:15.308642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.308654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}}) 2025-09-17 16:10:15.308667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.308676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.308684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.308698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-17 16:10:15.308707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.308722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.308731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.308739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-17 16:10:15.308749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-17 16:10:15.308762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-17 16:10:15.308771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.308790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.308799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-17 16:10:15.308807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:10:15.308816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-17 16:10:15.308824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.308958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-17 16:10:15.309016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:10:15.309024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-17 16:10:15.309033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-17 16:10:15.309070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-17 16:10:15.309082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-17 16:10:15.309090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:10:15.309107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-17 16:10:15.309120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:10:15.309147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:10:15.309155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-17 16:10:15.309228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-17 16:10:15.309245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-17 16:10:15.309258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:10:15.309283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-17 16:10:15.309301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-17 16:10:15.309309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-17 16:10:15.309335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:10:15.309347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309355 | orchestrator |
2025-09-17 16:10:15.309364 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-09-17 16:10:15.309372 | orchestrator | Wednesday 17 September 2025 16:07:38 +0000 (0:00:04.528) 0:03:26.474 ***
2025-09-17 16:10:15.309381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:10:15.309389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-17 16:10:15.309437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-17 16:10:15.309454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-17 16:10:15.309466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-17 16:10:15.309479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image':
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:10:15.309491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:10:15.309500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-17 16:10:15.309534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 16:10:15.309555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 
'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-17 16:10:15.309588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-17 16:10:15.309602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:10:15.309626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 16:10:15.309635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 16:10:15.309659 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.309669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:10:15.309692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:10:15.309704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 
'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-17 16:10:15.309737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 16:10:15.309760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-17 16:10:15.309795 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-17 16:10:15.309809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:10:15.309831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 16:10:15.309841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 16:10:15.309863 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.309873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:10:15.309897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-17 16:10:15.309919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-17 16:10:15.309929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-17 16:10:15.309953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:10:15.309966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.309974 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.309981 | orchestrator | 2025-09-17 16:10:15.309989 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-17 16:10:15.309997 | orchestrator | Wednesday 17 September 2025 16:07:40 +0000 (0:00:01.510) 0:03:27.984 *** 2025-09-17 16:10:15.310005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-17 16:10:15.310130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-17 16:10:15.310169 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.310179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-17 16:10:15.310202 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-17 16:10:15.310216 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.310224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-17 16:10:15.310232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-17 16:10:15.310240 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.310248 | orchestrator | 2025-09-17 16:10:15.310256 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-17 16:10:15.310264 | orchestrator | Wednesday 17 September 2025 16:07:41 +0000 (0:00:01.484) 0:03:29.469 *** 2025-09-17 16:10:15.310272 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.310279 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.310287 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.310295 | orchestrator | 2025-09-17 16:10:15.310303 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-17 16:10:15.310311 | orchestrator | Wednesday 17 September 2025 16:07:43 +0000 (0:00:01.876) 0:03:31.346 *** 2025-09-17 16:10:15.310319 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.310327 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.310334 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.310342 | orchestrator | 2025-09-17 16:10:15.310350 | orchestrator | TASK [include_role : placement] 
************************************************ 2025-09-17 16:10:15.310358 | orchestrator | Wednesday 17 September 2025 16:07:45 +0000 (0:00:02.111) 0:03:33.458 *** 2025-09-17 16:10:15.310366 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.310374 | orchestrator | 2025-09-17 16:10:15.310381 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-17 16:10:15.310389 | orchestrator | Wednesday 17 September 2025 16:07:46 +0000 (0:00:01.175) 0:03:34.634 *** 2025-09-17 16:10:15.310397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.310432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.310446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.310460 | orchestrator | 2025-09-17 16:10:15.310468 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-17 16:10:15.310476 | orchestrator | Wednesday 17 September 2025 16:07:50 +0000 (0:00:03.244) 0:03:37.878 *** 2025-09-17 16:10:15.310484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.310492 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.310500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.310509 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.310537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.310551 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.310559 | orchestrator | 2025-09-17 16:10:15.310567 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-17 16:10:15.310575 | orchestrator | Wednesday 17 September 2025 16:07:50 +0000 (0:00:00.830) 0:03:38.708 *** 2025-09-17 16:10:15.310583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-17 16:10:15.310594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-17 16:10:15.310603 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.310611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-17 16:10:15.310619 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-17 16:10:15.310627 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.310634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-17 16:10:15.310642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-17 16:10:15.310650 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.310658 | orchestrator | 2025-09-17 16:10:15.310665 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-17 16:10:15.310673 | orchestrator | Wednesday 17 September 2025 16:07:51 +0000 (0:00:00.775) 0:03:39.484 *** 2025-09-17 16:10:15.310681 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.310689 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.310696 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.310704 | orchestrator | 2025-09-17 16:10:15.310712 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-17 16:10:15.310719 | orchestrator | Wednesday 17 September 2025 16:07:52 +0000 (0:00:01.273) 0:03:40.758 *** 2025-09-17 16:10:15.310727 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.310735 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.310742 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.310751 | orchestrator | 2025-09-17 16:10:15.310760 | orchestrator | TASK [include_role : nova] 
***************************************************** 2025-09-17 16:10:15.310768 | orchestrator | Wednesday 17 September 2025 16:07:55 +0000 (0:00:02.042) 0:03:42.800 *** 2025-09-17 16:10:15.310778 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.310786 | orchestrator | 2025-09-17 16:10:15.310795 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-17 16:10:15.310804 | orchestrator | Wednesday 17 September 2025 16:07:56 +0000 (0:00:01.441) 0:03:44.242 *** 2025-09-17 16:10:15.310833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.310849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.310862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.310873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.310883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.310892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.310929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.310940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.310950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.310959 | orchestrator | 2025-09-17 16:10:15.310968 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-17 16:10:15.310977 | orchestrator | Wednesday 17 September 2025 16:08:00 +0000 (0:00:03.782) 0:03:48.024 *** 2025-09-17 16:10:15.310987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.311021 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.311032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.311041 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.311054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.311064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.311073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.311087 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.311116 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.311130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.311139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.311147 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.311155 | orchestrator | 2025-09-17 16:10:15.311163 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-17 16:10:15.311171 | orchestrator | Wednesday 17 September 2025 16:08:00 +0000 (0:00:00.627) 0:03:48.651 *** 2025-09-17 16:10:15.311179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-17 16:10:15.311272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-17 16:10:15.311282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-17 16:10:15.311290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-17 16:10:15.311307 | orchestrator | skipping: [testbed-node-0] 2025-09-17 
16:10:15.311315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-17 16:10:15.311323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-17 16:10:15.311331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-17 16:10:15.311339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-17 16:10:15.311347 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.311376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-17 16:10:15.311385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-17 16:10:15.311393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-17 16:10:15.311401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-17 16:10:15.311409 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.311417 | orchestrator | 2025-09-17 16:10:15.311425 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-17 16:10:15.311432 | orchestrator | Wednesday 17 September 2025 16:08:02 +0000 (0:00:01.223) 0:03:49.875 *** 2025-09-17 16:10:15.311440 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.311448 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.311456 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.311463 | orchestrator | 2025-09-17 16:10:15.311472 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-17 16:10:15.311479 | orchestrator | Wednesday 17 September 2025 16:08:03 +0000 (0:00:01.294) 0:03:51.170 *** 2025-09-17 16:10:15.311490 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.311498 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.311506 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.311514 | orchestrator | 2025-09-17 16:10:15.311522 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-17 16:10:15.311530 | orchestrator | Wednesday 17 September 2025 16:08:05 +0000 (0:00:01.871) 0:03:53.041 *** 2025-09-17 16:10:15.311538 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.311546 | orchestrator | 2025-09-17 16:10:15.311553 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-17 16:10:15.311561 | orchestrator | Wednesday 17 September 2025 16:08:06 +0000 (0:00:01.542) 0:03:54.584 *** 2025-09-17 16:10:15.311569 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-novncproxy) 2025-09-17 16:10:15.311577 | orchestrator | 2025-09-17 16:10:15.311585 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-17 16:10:15.311597 | orchestrator | Wednesday 17 September 2025 16:08:07 +0000 (0:00:00.822) 0:03:55.407 *** 2025-09-17 16:10:15.311606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-17 16:10:15.311615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-17 16:10:15.311623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-17 16:10:15.311631 | orchestrator | 
2025-09-17 16:10:15.311639 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-17 16:10:15.311647 | orchestrator | Wednesday 17 September 2025 16:08:11 +0000 (0:00:04.009) 0:03:59.416 *** 2025-09-17 16:10:15.311673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 16:10:15.311683 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.311691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 16:10:15.311699 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.311711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 16:10:15.311719 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.311727 | orchestrator | 2025-09-17 16:10:15.311735 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-17 16:10:15.311742 | orchestrator | Wednesday 17 September 2025 16:08:12 +0000 (0:00:01.369) 0:04:00.785 *** 2025-09-17 16:10:15.311756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-17 16:10:15.311763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-17 16:10:15.311770 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.311777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-17 16:10:15.311784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-17 16:10:15.311790 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.311797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-17 16:10:15.311804 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-17 16:10:15.311810 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.311817 | orchestrator | 2025-09-17 16:10:15.311824 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-17 16:10:15.311830 | orchestrator | Wednesday 17 September 2025 16:08:14 +0000 (0:00:01.471) 0:04:02.257 *** 2025-09-17 16:10:15.311837 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.311843 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.311850 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.311856 | orchestrator | 2025-09-17 16:10:15.311863 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-17 16:10:15.311869 | orchestrator | Wednesday 17 September 2025 16:08:16 +0000 (0:00:02.415) 0:04:04.672 *** 2025-09-17 16:10:15.311876 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.311882 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.311888 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.311895 | orchestrator | 2025-09-17 16:10:15.311901 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-17 16:10:15.311908 | orchestrator | Wednesday 17 September 2025 16:08:19 +0000 (0:00:02.901) 0:04:07.574 *** 2025-09-17 16:10:15.311915 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-17 16:10:15.311921 | orchestrator | 2025-09-17 16:10:15.311928 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-17 16:10:15.311950 | orchestrator | 
Wednesday 17 September 2025 16:08:21 +0000 (0:00:01.466) 0:04:09.041 *** 2025-09-17 16:10:15.311958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 16:10:15.311965 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.311977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 16:10:15.311983 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.311990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 16:10:15.311997 | orchestrator | skipping: [testbed-node-2] 2025-09-17 
16:10:15.312004 | orchestrator | 2025-09-17 16:10:15.312010 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-17 16:10:15.312017 | orchestrator | Wednesday 17 September 2025 16:08:22 +0000 (0:00:01.290) 0:04:10.331 *** 2025-09-17 16:10:15.312024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 16:10:15.312030 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.312037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 16:10:15.312044 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.312114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-17 16:10:15.312132 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.312138 | orchestrator | 2025-09-17 16:10:15.312145 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-17 16:10:15.312152 | orchestrator | Wednesday 17 September 2025 16:08:23 +0000 (0:00:01.197) 0:04:11.529 *** 2025-09-17 16:10:15.312158 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.312165 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.312171 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.312178 | orchestrator | 2025-09-17 16:10:15.312198 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-17 16:10:15.312225 | orchestrator | Wednesday 17 September 2025 16:08:25 +0000 (0:00:01.579) 0:04:13.108 *** 2025-09-17 16:10:15.312238 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:10:15.312245 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:10:15.312252 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:10:15.312258 | orchestrator | 2025-09-17 16:10:15.312264 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-17 16:10:15.312271 | orchestrator | Wednesday 17 September 2025 16:08:27 +0000 (0:00:02.005) 0:04:15.113 *** 2025-09-17 16:10:15.312278 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:10:15.312284 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:10:15.312290 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:10:15.312297 | orchestrator | 2025-09-17 16:10:15.312303 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-17 16:10:15.312310 | orchestrator | Wednesday 17 September 2025 16:08:29 +0000 (0:00:02.548) 0:04:17.662 *** 2025-09-17 16:10:15.312316 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-17 16:10:15.312323 | orchestrator | 2025-09-17 16:10:15.312329 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-17 16:10:15.312336 | orchestrator | Wednesday 17 September 2025 16:08:30 +0000 (0:00:00.737) 0:04:18.399 *** 2025-09-17 16:10:15.312346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-17 16:10:15.312354 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.312361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-17 16:10:15.312367 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.312374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-17 16:10:15.312381 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.312387 | orchestrator | 2025-09-17 16:10:15.312394 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-17 16:10:15.312400 | orchestrator | Wednesday 17 September 2025 16:08:31 +0000 (0:00:01.336) 0:04:19.736 *** 2025-09-17 16:10:15.312407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-17 16:10:15.312413 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.312425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-17 16:10:15.312432 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.312453 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-17 16:10:15.312461 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.312468 | orchestrator | 2025-09-17 16:10:15.312475 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-17 16:10:15.312481 | orchestrator | Wednesday 17 September 2025 16:08:33 +0000 (0:00:01.336) 0:04:21.072 *** 2025-09-17 16:10:15.312488 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.312495 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.312501 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.312508 | orchestrator | 2025-09-17 16:10:15.312514 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-17 16:10:15.312521 | orchestrator | Wednesday 17 September 2025 16:08:34 +0000 (0:00:01.513) 0:04:22.585 *** 2025-09-17 16:10:15.312527 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:10:15.312534 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:10:15.312540 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:10:15.312547 | orchestrator | 2025-09-17 16:10:15.312553 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-17 16:10:15.312560 | orchestrator | Wednesday 17 September 2025 16:08:37 +0000 (0:00:02.462) 0:04:25.047 *** 2025-09-17 16:10:15.312566 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:10:15.312573 | orchestrator | ok: 
[testbed-node-1] 2025-09-17 16:10:15.312579 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:10:15.312586 | orchestrator | 2025-09-17 16:10:15.312597 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-17 16:10:15.312604 | orchestrator | Wednesday 17 September 2025 16:08:40 +0000 (0:00:02.848) 0:04:27.895 *** 2025-09-17 16:10:15.312610 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.312617 | orchestrator | 2025-09-17 16:10:15.312624 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-17 16:10:15.312630 | orchestrator | Wednesday 17 September 2025 16:08:41 +0000 (0:00:01.593) 0:04:29.490 *** 2025-09-17 16:10:15.312637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.312650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.312657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-17 16:10:15.312681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-17 16:10:15.312689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 
'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.312699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.312706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.312717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.312724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.312745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.312753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.312763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-17 16:10:15.312770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.312781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.312788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.312795 | orchestrator | 2025-09-17 16:10:15.312802 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-17 16:10:15.312808 | orchestrator | Wednesday 17 September 2025 16:08:45 +0000 (0:00:03.494) 0:04:32.984 *** 2025-09-17 16:10:15.312830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.312838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-17 16:10:15.312848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.312855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.312867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.312873 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.312881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.312902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-17 16:10:15.312910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.312919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.312927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.312937 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.312944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.312951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-17 16:10:15.312973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.312981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-17 16:10:15.312993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:10:15.313000 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.313011 | orchestrator | 2025-09-17 16:10:15.313017 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-17 16:10:15.313024 | orchestrator | Wednesday 17 September 2025 16:08:46 +0000 (0:00:00.997) 0:04:33.981 *** 2025-09-17 16:10:15.313031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-17 16:10:15.313037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-17 16:10:15.313044 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.313051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-17 16:10:15.313057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-17 16:10:15.313064 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.313071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-17 16:10:15.313078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-17 16:10:15.313084 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.313091 | orchestrator | 2025-09-17 16:10:15.313097 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-17 16:10:15.313104 | orchestrator | Wednesday 17 September 2025 16:08:47 +0000 (0:00:01.226) 0:04:35.208 *** 2025-09-17 16:10:15.313110 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.313117 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.313123 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.313130 | orchestrator | 2025-09-17 16:10:15.313136 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-17 16:10:15.313143 | orchestrator | Wednesday 17 September 2025 16:08:48 +0000 (0:00:01.434) 0:04:36.643 *** 2025-09-17 16:10:15.313150 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:10:15.313156 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:10:15.313162 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:10:15.313169 | orchestrator | 2025-09-17 16:10:15.313176 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-17 16:10:15.313193 | orchestrator | Wednesday 17 September 2025 16:08:50 +0000 (0:00:02.099) 0:04:38.742 *** 2025-09-17 16:10:15.313201 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.313207 | orchestrator | 2025-09-17 16:10:15.313214 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2025-09-17 16:10:15.313220 | orchestrator | Wednesday 17 September 2025 16:08:52 +0000 (0:00:01.597) 0:04:40.339 *** 2025-09-17 16:10:15.313244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:10:15.313260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:10:15.313267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:10:15.313275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:10:15.313298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:10:15.313314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:10:15.313322 | orchestrator | 2025-09-17 16:10:15.313329 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-17 16:10:15.313336 | orchestrator | Wednesday 17 September 2025 16:08:57 +0000 (0:00:05.072) 0:04:45.412 *** 2025-09-17 16:10:15.313343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 16:10:15.313350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 16:10:15.313357 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.313379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 16:10:15.313396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 16:10:15.313403 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.313410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 16:10:15.313417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 16:10:15.313424 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.313431 | orchestrator | 2025-09-17 16:10:15.313438 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-17 16:10:15.313444 | orchestrator | Wednesday 17 September 2025 16:08:58 +0000 (0:00:00.619) 0:04:46.032 *** 2025-09-17 16:10:15.313451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-17 16:10:15.313458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-17 16:10:15.313487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-17 16:10:15.313495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-17 16:10:15.313502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-17 16:10:15.313509 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.313516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-17 16:10:15.313523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-17 16:10:15.313533 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.313540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-17 16:10:15.313547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-17 16:10:15.313553 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.313560 | orchestrator | 2025-09-17 16:10:15.313566 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-17 16:10:15.313573 | orchestrator | Wednesday 17 September 2025 16:08:59 +0000 (0:00:01.552) 0:04:47.584 *** 2025-09-17 16:10:15.313580 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.313586 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.313593 | orchestrator | 
skipping: [testbed-node-2] 2025-09-17 16:10:15.313599 | orchestrator | 2025-09-17 16:10:15.313606 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-17 16:10:15.313612 | orchestrator | Wednesday 17 September 2025 16:09:00 +0000 (0:00:00.436) 0:04:48.021 *** 2025-09-17 16:10:15.313619 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.313625 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.313631 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.313638 | orchestrator | 2025-09-17 16:10:15.313645 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-17 16:10:15.313651 | orchestrator | Wednesday 17 September 2025 16:09:01 +0000 (0:00:01.293) 0:04:49.314 *** 2025-09-17 16:10:15.313658 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.313664 | orchestrator | 2025-09-17 16:10:15.313670 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-17 16:10:15.313677 | orchestrator | Wednesday 17 September 2025 16:09:03 +0000 (0:00:01.616) 0:04:50.930 *** 2025-09-17 16:10:15.313684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 16:10:15.313695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 16:10:15.313717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.313726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.313736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 16:10:15.313743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 16:10:15.313750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 16:10:15.313762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.313783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 16:10:15.313791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.313798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 16:10:15.313808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 16:10:15.313815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.313822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.313833 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 16:10:15.313844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 16:10:15.313852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-17 16:10:15.313864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.313871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 16:10:15.313883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.313890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-17 16:10:15.313902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 16:10:15.313912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 16:10:15.313919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.313927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 
'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-17 16:10:15.313940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.313947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.313957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-17 16:10:15.313965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:10:15.313974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-17 16:10:15.313981 | orchestrator |
2025-09-17 16:10:15.313988 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-09-17 16:10:15.313995 | orchestrator | Wednesday 17 September 2025 16:09:07 +0000 (0:00:04.072) 0:04:55.002 ***
2025-09-17 16:10:15.314002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-17 16:10:15.314044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 16:10:15.314051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.314058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.314069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 16:10:15.314081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-17 16:10:15.314088 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-17 16:10:15.314100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.314107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-17 16:10:15.314114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.314125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 16:10:15.314132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 16:10:15.314139 | orchestrator | skipping: [testbed-node-0] 2025-09-17 
16:10:15.314150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.314157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.314169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 16:10:15.314176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-17 16:10:15.314224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-17 16:10:15.314233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.314244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-17 16:10:15.314256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.314263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 16:10:15.314274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 16:10:15.314280 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.314287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.314298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.314305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 16:10:15.314315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-17 16:10:15.314328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-17 16:10:15.314335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.314341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:10:15.314352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 16:10:15.314359 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.314365 | orchestrator | 2025-09-17 16:10:15.314372 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-17 16:10:15.314379 | orchestrator | Wednesday 17 September 2025 16:09:08 +0000 (0:00:00.833) 0:04:55.836 *** 2025-09-17 16:10:15.314386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-17 16:10:15.314392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-17 16:10:15.314399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-17 16:10:15.314416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-17 16:10:15.314423 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.314430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-17 16:10:15.314437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-17 16:10:15.314444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-17 16:10:15.314451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-17 16:10:15.314458 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.314464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-17 16:10:15.314471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-17 16:10:15.314478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-17 16:10:15.314484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-17 16:10:15.314491 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.314498 | orchestrator | 2025-09-17 16:10:15.314505 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-17 16:10:15.314512 | orchestrator | Wednesday 17 September 2025 16:09:09 +0000 (0:00:01.229) 0:04:57.065 *** 2025-09-17 16:10:15.314518 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.314524 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.314531 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.314537 | orchestrator | 2025-09-17 16:10:15.314544 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-17 16:10:15.314550 | orchestrator | Wednesday 17 September 2025 16:09:09 +0000 (0:00:00.470) 0:04:57.536 *** 2025-09-17 16:10:15.314561 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.314567 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.314574 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.314580 | orchestrator | 2025-09-17 16:10:15.314587 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-17 16:10:15.314593 | orchestrator | Wednesday 17 September 2025 16:09:11 +0000 (0:00:01.282) 0:04:58.819 *** 2025-09-17 16:10:15.314604 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.314611 | orchestrator | 2025-09-17 16:10:15.314618 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-17 16:10:15.314624 | orchestrator | Wednesday 17 September 2025 16:09:12 +0000 (0:00:01.771) 0:05:00.590 *** 2025-09-17 16:10:15.314635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 16:10:15.314643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 16:10:15.314650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-17 16:10:15.314657 | orchestrator | 2025-09-17 16:10:15.314664 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-17 16:10:15.314671 | orchestrator | Wednesday 17 September 2025 16:09:14 +0000 (0:00:02.141) 0:05:02.731 *** 2025-09-17 16:10:15.314681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-17 16:10:15.314693 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.314703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-17 16:10:15.314710 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.314717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-17 16:10:15.314724 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.314731 | orchestrator | 2025-09-17 16:10:15.314737 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-17 16:10:15.314743 | orchestrator | Wednesday 17 September 2025 16:09:15 +0000 (0:00:00.412) 0:05:03.144 *** 2025-09-17 16:10:15.314749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-17 16:10:15.314755 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.314761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-17 16:10:15.314767 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.314773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-17 16:10:15.314779 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.314785 | orchestrator | 2025-09-17 16:10:15.314791 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-17 16:10:15.314797 | orchestrator | Wednesday 17 September 2025 16:09:15 +0000 (0:00:00.629) 0:05:03.774 *** 2025-09-17 16:10:15.314807 | orchestrator | skipping: [testbed-node-1] 
2025-09-17 16:10:15.314813 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.314819 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.314825 | orchestrator | 2025-09-17 16:10:15.314831 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-17 16:10:15.314837 | orchestrator | Wednesday 17 September 2025 16:09:16 +0000 (0:00:00.788) 0:05:04.562 *** 2025-09-17 16:10:15.314843 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.314849 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.314855 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:10:15.314861 | orchestrator | 2025-09-17 16:10:15.314867 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-17 16:10:15.314876 | orchestrator | Wednesday 17 September 2025 16:09:18 +0000 (0:00:01.294) 0:05:05.857 *** 2025-09-17 16:10:15.314883 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:10:15.314889 | orchestrator | 2025-09-17 16:10:15.314895 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-17 16:10:15.314901 | orchestrator | Wednesday 17 September 2025 16:09:19 +0000 (0:00:01.425) 0:05:07.283 *** 2025-09-17 16:10:15.314907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.314916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.314923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.314934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.314945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.314955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-17 16:10:15.314961 | orchestrator | 2025-09-17 16:10:15.314967 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-17 16:10:15.314973 | orchestrator | Wednesday 17 September 2025 16:09:25 +0000 (0:00:06.488) 0:05:13.771 *** 2025-09-17 16:10:15.314980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.314990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.314997 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:10:15.315007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.315017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-17 16:10:15.315023 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:10:15.315029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-17 16:10:15.315040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-17 16:10:15.315046 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.315052 | orchestrator |
2025-09-17 16:10:15.315059 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-09-17 16:10:15.315065 | orchestrator | Wednesday 17 September 2025 16:09:26 +0000 (0:00:00.687) 0:05:14.458 ***
2025-09-17 16:10:15.315071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-17 16:10:15.315080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-17 16:10:15.315087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-17 16:10:15.315093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-17 16:10:15.315099 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.315105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-17 16:10:15.315111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-17 16:10:15.315120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-17 16:10:15.315127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-17 16:10:15.315133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-17 16:10:15.315139 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.315145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-17 16:10:15.315152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-17 16:10:15.315161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-17 16:10:15.315168 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.315174 | orchestrator |
2025-09-17 16:10:15.315180 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-09-17 16:10:15.315200 | orchestrator | Wednesday 17 September 2025 16:09:27 +0000 (0:00:00.932) 0:05:15.391 ***
2025-09-17 16:10:15.315206 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:10:15.315212 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:10:15.315218 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:10:15.315224 | orchestrator |
2025-09-17 16:10:15.315230 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-09-17 16:10:15.315237 | orchestrator | Wednesday 17 September 2025 16:09:29 +0000 (0:00:02.241) 0:05:17.633 ***
2025-09-17 16:10:15.315243 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:10:15.315249 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:10:15.315255 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:10:15.315261 | orchestrator |
2025-09-17 16:10:15.315267 | orchestrator | TASK [include_role : swift] ****************************************************
2025-09-17 16:10:15.315273 | orchestrator | Wednesday 17 September 2025 16:09:32 +0000 (0:00:02.278) 0:05:19.911 ***
2025-09-17 16:10:15.315279 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.315285 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.315291 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.315297 | orchestrator |
2025-09-17 16:10:15.315303 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-09-17 16:10:15.315309 | orchestrator | Wednesday 17 September 2025 16:09:32 +0000 (0:00:00.351) 0:05:20.262 ***
2025-09-17 16:10:15.315315 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.315321 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.315327 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.315333 | orchestrator |
2025-09-17 16:10:15.315340 | orchestrator | TASK [include_role : trove] ****************************************************
2025-09-17 16:10:15.315346 | orchestrator | Wednesday 17 September 2025 16:09:32 +0000 (0:00:00.351) 0:05:20.614 ***
2025-09-17 16:10:15.315352 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.315358 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.315364 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.315370 | orchestrator |
2025-09-17 16:10:15.315376 | orchestrator | TASK [include_role : venus] ****************************************************
2025-09-17 16:10:15.315382 | orchestrator | Wednesday 17 September 2025 16:09:33 +0000 (0:00:00.324) 0:05:20.939 ***
2025-09-17 16:10:15.315392 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.315398 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.315404 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.315410 | orchestrator |
2025-09-17 16:10:15.315416 | orchestrator | TASK [include_role : watcher] **************************************************
2025-09-17 16:10:15.315422 | orchestrator | Wednesday 17 September 2025 16:09:33 +0000 (0:00:00.608) 0:05:21.547 ***
2025-09-17 16:10:15.315429 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.315434 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.315441 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.315447 | orchestrator |
2025-09-17 16:10:15.315453 | orchestrator | TASK [include_role : zun] ******************************************************
2025-09-17 16:10:15.315459 | orchestrator | Wednesday 17 September 2025 16:09:34 +0000 (0:00:00.344) 0:05:21.892 ***
2025-09-17 16:10:15.315465 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.315471 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.315477 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.315488 | orchestrator |
2025-09-17 16:10:15.315494 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-09-17 16:10:15.315500 | orchestrator | Wednesday 17 September 2025 16:09:34 +0000 (0:00:00.530) 0:05:22.422 ***
2025-09-17 16:10:15.315506 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:10:15.315512 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:10:15.315519 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:10:15.315525 | orchestrator |
2025-09-17 16:10:15.315531 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-09-17 16:10:15.315537 | orchestrator | Wednesday 17 September 2025 16:09:35 +0000 (0:00:00.350) 0:05:23.386 ***
2025-09-17 16:10:15.315543 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:10:15.315549 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:10:15.315556 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:10:15.315562 | orchestrator |
2025-09-17 16:10:15.315573 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-09-17 16:10:15.315579 | orchestrator | Wednesday 17 September 2025 16:09:35 +0000 (0:00:00.350) 0:05:23.736 ***
2025-09-17 16:10:15.315585 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:10:15.315591 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:10:15.315597 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:10:15.315603 | orchestrator |
2025-09-17 16:10:15.315610 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-09-17 16:10:15.315616 | orchestrator | Wednesday 17 September 2025 16:09:36 +0000 (0:00:00.950) 0:05:24.687 ***
2025-09-17 16:10:15.315622 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:10:15.315628 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:10:15.315634 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:10:15.315640 | orchestrator |
2025-09-17 16:10:15.315646 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-09-17 16:10:15.315653 | orchestrator | Wednesday 17 September 2025 16:09:37 +0000 (0:00:00.925) 0:05:25.613 ***
2025-09-17 16:10:15.315659 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:10:15.315665 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:10:15.315671 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:10:15.315677 | orchestrator |
2025-09-17 16:10:15.315683 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-09-17 16:10:15.315689 | orchestrator | Wednesday 17 September 2025 16:09:39 +0000 (0:00:01.236) 0:05:26.850 ***
2025-09-17 16:10:15.315695 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:10:15.315701 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:10:15.315707 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:10:15.315713 | orchestrator |
2025-09-17 16:10:15.315719 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-09-17 16:10:15.315725 | orchestrator | Wednesday 17 September 2025 16:09:43 +0000 (0:00:04.636) 0:05:31.487 ***
2025-09-17 16:10:15.315731 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:10:15.315737 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:10:15.315743 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:10:15.315749 | orchestrator |
2025-09-17 16:10:15.315755 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-09-17 16:10:15.315761 | orchestrator | Wednesday 17 September 2025 16:09:46 +0000 (0:00:02.902) 0:05:34.389 ***
2025-09-17 16:10:15.315768 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:10:15.315774 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:10:15.315780 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:10:15.315786 | orchestrator |
2025-09-17 16:10:15.315792 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-09-17 16:10:15.315798 | orchestrator | Wednesday 17 September 2025 16:09:59 +0000 (0:00:13.262) 0:05:47.652 ***
2025-09-17 16:10:15.315804 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:10:15.315810 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:10:15.315816 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:10:15.315822 | orchestrator |
2025-09-17 16:10:15.315828 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-09-17 16:10:15.315838 | orchestrator | Wednesday 17 September 2025 16:10:00 +0000 (0:00:00.745) 0:05:48.397 ***
2025-09-17 16:10:15.315845 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:10:15.315851 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:10:15.315857 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:10:15.315863 | orchestrator |
2025-09-17 16:10:15.315869 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-09-17 16:10:15.315875 | orchestrator | Wednesday 17 September 2025 16:10:10 +0000 (0:00:09.406) 0:05:57.804 ***
2025-09-17 16:10:15.315881 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.315887 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.315893 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.315899 | orchestrator |
2025-09-17 16:10:15.315905 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-09-17 16:10:15.315911 | orchestrator | Wednesday 17 September 2025 16:10:10 +0000 (0:00:00.322) 0:05:58.126 ***
2025-09-17 16:10:15.315917 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.315923 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.315929 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.315935 | orchestrator |
2025-09-17 16:10:15.315941 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-09-17 16:10:15.315948 | orchestrator | Wednesday 17 September 2025 16:10:10 +0000 (0:00:00.312) 0:05:58.438 ***
2025-09-17 16:10:15.315954 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.315960 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.315969 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.315976 | orchestrator |
2025-09-17 16:10:15.315982 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-09-17 16:10:15.315988 | orchestrator | Wednesday 17 September 2025 16:10:10 +0000 (0:00:00.291) 0:05:58.730 ***
2025-09-17 16:10:15.315994 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.316000 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.316006 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.316012 | orchestrator |
2025-09-17 16:10:15.316018 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-09-17 16:10:15.316024 | orchestrator | Wednesday 17 September 2025 16:10:11 +0000 (0:00:00.552) 0:05:59.283 ***
2025-09-17 16:10:15.316030 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.316036 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.316042 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.316048 | orchestrator |
2025-09-17 16:10:15.316054 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-09-17 16:10:15.316060 | orchestrator | Wednesday 17 September 2025 16:10:11 +0000 (0:00:00.340) 0:05:59.623 ***
2025-09-17 16:10:15.316066 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:10:15.316072 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:10:15.316078 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:10:15.316084 | orchestrator |
2025-09-17 16:10:15.316090 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-09-17 16:10:15.316096 | orchestrator | Wednesday 17 September 2025 16:10:12 +0000 (0:00:00.303) 0:05:59.926 ***
2025-09-17 16:10:15.316102 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:10:15.316109 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:10:15.316115 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:10:15.316120 | orchestrator |
2025-09-17 16:10:15.316127 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-09-17 16:10:15.316136 | orchestrator | Wednesday 17 September 2025 16:10:13 +0000 (0:00:01.167) 0:06:01.094 ***
2025-09-17 16:10:15.316142 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:10:15.316148 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:10:15.316154 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:10:15.316160 | orchestrator |
2025-09-17 16:10:15.316166 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:10:15.316172 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-17 16:10:15.316194 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-17 16:10:15.316200 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-17 16:10:15.316207 | orchestrator |
2025-09-17 16:10:15.316213 | orchestrator |
2025-09-17 16:10:15.316219 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:10:15.316225 | orchestrator | Wednesday 17 September 2025 16:10:14 +0000 (0:00:01.116) 0:06:02.210 ***
2025-09-17 16:10:15.316231 | orchestrator | ===============================================================================
2025-09-17 16:10:15.316238 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.26s
2025-09-17 16:10:15.316244 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.41s
2025-09-17 16:10:15.316250 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.49s
2025-09-17 16:10:15.316256 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.26s
2025-09-17 16:10:15.316262 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.07s
2025-09-17 16:10:15.316268 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.64s
2025-09-17 16:10:15.316274 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.53s
2025-09-17 16:10:15.316280 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.51s
2025-09-17 16:10:15.316286 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.32s
2025-09-17 16:10:15.316292 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.23s
2025-09-17 16:10:15.316298 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.09s
2025-09-17 16:10:15.316304 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.07s
2025-09-17 16:10:15.316310 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.04s
2025-09-17 16:10:15.316316 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.01s
2025-09-17 16:10:15.316322 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.92s
2025-09-17 16:10:15.316328 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 3.86s
2025-09-17 16:10:15.316334 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.78s
2025-09-17 16:10:15.316340 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.75s
2025-09-17 16:10:15.316346 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 3.63s
2025-09-17 16:10:15.316352 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.58s
2025-09-17 16:10:18.343845 | orchestrator | 2025-09-17 16:10:18 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED
2025-09-17 16:10:18.344224 | orchestrator | 2025-09-17 16:10:18 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED
2025-09-17 16:10:18.346257 | orchestrator | 2025-09-17 16:10:18 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED
2025-09-17 16:10:18.346306 | orchestrator | 2025-09-17 16:10:18 | INFO  |
Wait 1 second(s) until the next check
2025-09-17 16:12:17.102339 | orchestrator | 2025-09-17 16:12:17 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state
STARTED 2025-09-17 16:12:17.104285 | orchestrator | 2025-09-17 16:12:17 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:12:17.105247 | orchestrator | 2025-09-17 16:12:17 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:17.105543 | orchestrator | 2025-09-17 16:12:17 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:20.148675 | orchestrator | 2025-09-17 16:12:20 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:20.148775 | orchestrator | 2025-09-17 16:12:20 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:12:20.150181 | orchestrator | 2025-09-17 16:12:20 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:20.150227 | orchestrator | 2025-09-17 16:12:20 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:23.201555 | orchestrator | 2025-09-17 16:12:23 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:23.203643 | orchestrator | 2025-09-17 16:12:23 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:12:23.206074 | orchestrator | 2025-09-17 16:12:23 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:23.206101 | orchestrator | 2025-09-17 16:12:23 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:26.253469 | orchestrator | 2025-09-17 16:12:26 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:26.254755 | orchestrator | 2025-09-17 16:12:26 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:12:26.256794 | orchestrator | 2025-09-17 16:12:26 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:26.256819 | orchestrator | 2025-09-17 16:12:26 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:29.296808 | orchestrator | 
2025-09-17 16:12:29 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:29.298623 | orchestrator | 2025-09-17 16:12:29 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state STARTED 2025-09-17 16:12:29.301504 | orchestrator | 2025-09-17 16:12:29 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:29.301850 | orchestrator | 2025-09-17 16:12:29 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:32.349190 | orchestrator | 2025-09-17 16:12:32 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:32.354943 | orchestrator | 2025-09-17 16:12:32 | INFO  | Task 8f8ce7ac-2faf-43ec-ac7f-f41ab9b03f1e is in state SUCCESS 2025-09-17 16:12:32.356878 | orchestrator | 2025-09-17 16:12:32.356914 | orchestrator | 2025-09-17 16:12:32.356927 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-09-17 16:12:32.357001 | orchestrator | 2025-09-17 16:12:32.357100 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-17 16:12:32.357114 | orchestrator | Wednesday 17 September 2025 16:01:14 +0000 (0:00:00.755) 0:00:00.755 *** 2025-09-17 16:12:32.357126 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.357186 | orchestrator | 2025-09-17 16:12:32.357244 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-17 16:12:32.357256 | orchestrator | Wednesday 17 September 2025 16:01:15 +0000 (0:00:01.051) 0:00:01.807 *** 2025-09-17 16:12:32.357267 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.357278 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.357289 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.357299 | orchestrator | ok: [testbed-node-2] 2025-09-17 
16:12:32.357368 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.357380 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.357390 | orchestrator | 2025-09-17 16:12:32.357401 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-17 16:12:32.357412 | orchestrator | Wednesday 17 September 2025 16:01:17 +0000 (0:00:01.756) 0:00:03.563 *** 2025-09-17 16:12:32.357422 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.357433 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.357457 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.357479 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.357490 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.357500 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.357511 | orchestrator | 2025-09-17 16:12:32.357521 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-17 16:12:32.357532 | orchestrator | Wednesday 17 September 2025 16:01:18 +0000 (0:00:00.659) 0:00:04.222 *** 2025-09-17 16:12:32.357543 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.357553 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.357563 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.357574 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.357584 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.357594 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.357605 | orchestrator | 2025-09-17 16:12:32.357615 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-17 16:12:32.357626 | orchestrator | Wednesday 17 September 2025 16:01:19 +0000 (0:00:00.963) 0:00:05.185 *** 2025-09-17 16:12:32.357636 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.357647 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.357672 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.357708 | orchestrator | ok: 
[testbed-node-3] 2025-09-17 16:12:32.357720 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.357730 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.357766 | orchestrator | 2025-09-17 16:12:32.357777 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-17 16:12:32.357788 | orchestrator | Wednesday 17 September 2025 16:01:19 +0000 (0:00:00.813) 0:00:05.998 *** 2025-09-17 16:12:32.357798 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.357860 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.357872 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.357882 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.357892 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.357903 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.357913 | orchestrator | 2025-09-17 16:12:32.357924 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-17 16:12:32.357935 | orchestrator | Wednesday 17 September 2025 16:01:20 +0000 (0:00:00.588) 0:00:06.586 *** 2025-09-17 16:12:32.357945 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.357956 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.357966 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.357976 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.357987 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.358007 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.358069 | orchestrator | 2025-09-17 16:12:32.358084 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-17 16:12:32.358095 | orchestrator | Wednesday 17 September 2025 16:01:21 +0000 (0:00:00.997) 0:00:07.584 *** 2025-09-17 16:12:32.358106 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.358117 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.358128 | orchestrator | skipping: 
[testbed-node-2] 2025-09-17 16:12:32.358138 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.358149 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.358159 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.358171 | orchestrator | 2025-09-17 16:12:32.358181 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-17 16:12:32.358291 | orchestrator | Wednesday 17 September 2025 16:01:22 +0000 (0:00:00.869) 0:00:08.454 *** 2025-09-17 16:12:32.358317 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.358327 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.358369 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.358381 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.358391 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.358402 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.358413 | orchestrator | 2025-09-17 16:12:32.358423 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-17 16:12:32.358434 | orchestrator | Wednesday 17 September 2025 16:01:23 +0000 (0:00:01.218) 0:00:09.672 *** 2025-09-17 16:12:32.358445 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-17 16:12:32.358456 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-17 16:12:32.358467 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-17 16:12:32.358477 | orchestrator | 2025-09-17 16:12:32.358488 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-17 16:12:32.358499 | orchestrator | Wednesday 17 September 2025 16:01:24 +0000 (0:00:00.668) 0:00:10.341 *** 2025-09-17 16:12:32.358510 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.358520 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.358531 | orchestrator | ok: 
[testbed-node-2] 2025-09-17 16:12:32.358541 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.358552 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.358562 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.358573 | orchestrator | 2025-09-17 16:12:32.358597 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-17 16:12:32.358609 | orchestrator | Wednesday 17 September 2025 16:01:25 +0000 (0:00:01.281) 0:00:11.623 *** 2025-09-17 16:12:32.358620 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-17 16:12:32.358630 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-17 16:12:32.358641 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-17 16:12:32.358651 | orchestrator | 2025-09-17 16:12:32.358662 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-17 16:12:32.358673 | orchestrator | Wednesday 17 September 2025 16:01:28 +0000 (0:00:02.974) 0:00:14.597 *** 2025-09-17 16:12:32.358683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-17 16:12:32.358694 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-17 16:12:32.358704 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-17 16:12:32.358715 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.358725 | orchestrator | 2025-09-17 16:12:32.358736 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-17 16:12:32.358747 | orchestrator | Wednesday 17 September 2025 16:01:29 +0000 (0:00:00.991) 0:00:15.589 *** 2025-09-17 16:12:32.358760 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.358784 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.358796 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.358807 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.358817 | orchestrator | 2025-09-17 16:12:32.358835 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-17 16:12:32.358846 | orchestrator | Wednesday 17 September 2025 16:01:30 +0000 (0:00:01.103) 0:00:16.693 *** 2025-09-17 16:12:32.358859 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.358872 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.358883 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.358894 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.358905 | orchestrator | 2025-09-17 16:12:32.358916 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-17 16:12:32.358927 | orchestrator | Wednesday 17 September 2025 16:01:31 +0000 (0:00:00.911) 0:00:17.605 *** 2025-09-17 16:12:32.358940 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-17 16:01:26.210170', 'end': '2025-09-17 16:01:26.526559', 'delta': '0:00:00.316389', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.358973 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-17 16:01:27.211755', 'end': '2025-09-17 16:01:27.506314', 'delta': '0:00:00.294559', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.358994 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-17 16:01:28.076824', 'end': '2025-09-17 16:01:28.382289', 'delta': '0:00:00.305465', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.359006 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.359017 | orchestrator | 2025-09-17 16:12:32.359184 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-17 16:12:32.359228 | orchestrator | Wednesday 17 September 2025 16:01:31 +0000 (0:00:00.305) 0:00:17.910 *** 2025-09-17 16:12:32.359240 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.359251 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.359261 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.359278 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.359289 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.359300 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.359310 | orchestrator | 2025-09-17 16:12:32.359321 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-17 16:12:32.359364 | orchestrator | Wednesday 17 September 2025 16:01:33 
+0000 (0:00:01.964) 0:00:19.875 *** 2025-09-17 16:12:32.359377 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.359387 | orchestrator | 2025-09-17 16:12:32.359398 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-17 16:12:32.359409 | orchestrator | Wednesday 17 September 2025 16:01:35 +0000 (0:00:01.302) 0:00:21.177 *** 2025-09-17 16:12:32.359419 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.359430 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.359440 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.359451 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.359462 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.359472 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.359483 | orchestrator | 2025-09-17 16:12:32.359493 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-17 16:12:32.359504 | orchestrator | Wednesday 17 September 2025 16:01:37 +0000 (0:00:01.965) 0:00:23.143 *** 2025-09-17 16:12:32.359515 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.359540 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.359550 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.359561 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.359571 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.359618 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.359629 | orchestrator | 2025-09-17 16:12:32.359640 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-17 16:12:32.359651 | orchestrator | Wednesday 17 September 2025 16:01:38 +0000 (0:00:01.124) 0:00:24.267 *** 2025-09-17 16:12:32.359661 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.359689 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.359700 | orchestrator | 
skipping: [testbed-node-2] 2025-09-17 16:12:32.359711 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.359721 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.359731 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.359742 | orchestrator | 2025-09-17 16:12:32.359753 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-17 16:12:32.359763 | orchestrator | Wednesday 17 September 2025 16:01:39 +0000 (0:00:01.049) 0:00:25.317 *** 2025-09-17 16:12:32.359774 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.359791 | orchestrator | 2025-09-17 16:12:32.359802 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-17 16:12:32.359813 | orchestrator | Wednesday 17 September 2025 16:01:39 +0000 (0:00:00.102) 0:00:25.419 *** 2025-09-17 16:12:32.359823 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.359834 | orchestrator | 2025-09-17 16:12:32.359844 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-17 16:12:32.359855 | orchestrator | Wednesday 17 September 2025 16:01:39 +0000 (0:00:00.295) 0:00:25.715 *** 2025-09-17 16:12:32.359866 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.359908 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.359920 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.359930 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.359941 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.359952 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.359973 | orchestrator | 2025-09-17 16:12:32.359985 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-17 16:12:32.360005 | orchestrator | Wednesday 17 September 2025 16:01:40 +0000 (0:00:00.665) 0:00:26.380 *** 2025-09-17 16:12:32.360129 | orchestrator | 
skipping: [testbed-node-0] 2025-09-17 16:12:32.360141 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.360152 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.360191 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.360343 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.360357 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.360367 | orchestrator | 2025-09-17 16:12:32.360378 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-17 16:12:32.360389 | orchestrator | Wednesday 17 September 2025 16:01:41 +0000 (0:00:00.714) 0:00:27.095 *** 2025-09-17 16:12:32.360399 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.360410 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.360420 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.360431 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.360441 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.360452 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.360462 | orchestrator | 2025-09-17 16:12:32.360473 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-17 16:12:32.360490 | orchestrator | Wednesday 17 September 2025 16:01:41 +0000 (0:00:00.701) 0:00:27.797 *** 2025-09-17 16:12:32.360509 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.360528 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.360546 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.360565 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.360577 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.360587 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.360597 | orchestrator | 2025-09-17 16:12:32.360608 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-17 
16:12:32.360619 | orchestrator | Wednesday 17 September 2025 16:01:42 +0000 (0:00:00.738) 0:00:28.536 *** 2025-09-17 16:12:32.360629 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.360640 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.360685 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.360737 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.360804 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.360816 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.360827 | orchestrator | 2025-09-17 16:12:32.360837 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-17 16:12:32.360886 | orchestrator | Wednesday 17 September 2025 16:01:43 +0000 (0:00:00.702) 0:00:29.238 *** 2025-09-17 16:12:32.360899 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.360910 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.360920 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.360986 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.361008 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.361019 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.361029 | orchestrator | 2025-09-17 16:12:32.361040 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-17 16:12:32.361051 | orchestrator | Wednesday 17 September 2025 16:01:44 +0000 (0:00:01.241) 0:00:30.479 *** 2025-09-17 16:12:32.361061 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.361072 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.361082 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.361093 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.361103 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.361114 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.361124 | 
orchestrator |
2025-09-17 16:12:32.361135 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-17 16:12:32.361146 | orchestrator | Wednesday 17 September 2025 16:01:45 +0000 (0:00:00.694) 0:00:31.174 ***
2025-09-17 16:12:32.361158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873', 'scsi-SQEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part1', 'scsi-SQEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part14', 'scsi-SQEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part15', 'scsi-SQEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part16', 'scsi-SQEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.361320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.361333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361419 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.361431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf', 'scsi-SQEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.361488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.361512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110', 'scsi-SQEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part1', 'scsi-SQEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part14', 'scsi-SQEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part15', 'scsi-SQEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part16', 'scsi-SQEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.361626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.361638 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.361649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c66c71d--5352--5b3e--b37c--d5d685617e79-osd--block--3c66c71d--5352--5b3e--b37c--d5d685617e79', 'dm-uuid-LVM-6RyMlMdjeOp7j1vRqfxRAJS3ApVXn13X2Vfadb7vhG6ge7Y1r6yBKNgM18gGcW0H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361667 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.361678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c-osd--block--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c', 'dm-uuid-LVM-vIyaoAaQU4BLTggnPtIfxsEZD3fWK7cz6KBYfcsu0o52AotTNOuzw91MCFv9KHzh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17f552da--d70b--5fe0--b76a--79be1323ddb4-osd--block--17f552da--d70b--5fe0--b76a--79be1323ddb4', 'dm-uuid-LVM-PdYNM3UuBlXGJqwN3in7M0c9PsPKTYErhy6wKZPnL1bwjK9oynUdSPDssfTgaOFP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d72d4826--7802--5629--b85e--59298af53c3a-osd--block--d72d4826--7802--5629--b85e--59298af53c3a', 'dm-uuid-LVM-n16DXE8IHM2auFI8fe4U37eK6xVMKQZubvLjSeU2AHjqqqODc6Exx2jcALvDasBJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.361933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.361954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part1', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part14', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part15', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part16', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.361977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--17f552da--d70b--5fe0--b76a--79be1323ddb4-osd--block--17f552da--d70b--5fe0--b76a--79be1323ddb4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lZa7Rh-i3Rn-xMzW-Vlv1-fNcw-aM2A-R4MUAR', 'scsi-0QEMU_QEMU_HARDDISK_389a752f-d381-48a7-a4b5-e7f86559b7a2', 'scsi-SQEMU_QEMU_HARDDISK_389a752f-d381-48a7-a4b5-e7f86559b7a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.361991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3c66c71d--5352--5b3e--b37c--d5d685617e79-osd--block--3c66c71d--5352--5b3e--b37c--d5d685617e79'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wZtIiw-R8yr-uRlx-X2bF-nyyO-Cudf-jRf67i', 'scsi-0QEMU_QEMU_HARDDISK_abcb278f-9464-4e60-af45-8a9c7109c560', 'scsi-SQEMU_QEMU_HARDDISK_abcb278f-9464-4e60-af45-8a9c7109c560'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.362003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c-osd--block--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RWWDaA-yPOu-TiwM-4vDa-dfQY-ugWA-9ZlceI', 'scsi-0QEMU_QEMU_HARDDISK_5c4f947a-1fa9-4d40-922c-b00760e10f53', 'scsi-SQEMU_QEMU_HARDDISK_5c4f947a-1fa9-4d40-922c-b00760e10f53'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.362054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d72d4826--7802--5629--b85e--59298af53c3a-osd--block--d72d4826--7802--5629--b85e--59298af53c3a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Cs4drC-MDE3-7Bth-4yzp-cd2h-cv6K-SUr5e3', 'scsi-0QEMU_QEMU_HARDDISK_11507ddf-c78f-4c5a-8643-6bad2a8b39ae', 'scsi-SQEMU_QEMU_HARDDISK_11507ddf-c78f-4c5a-8643-6bad2a8b39ae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.362076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff1b16a2-3e0a-432a-b441-b3fe8b453f6d', 'scsi-SQEMU_QEMU_HARDDISK_ff1b16a2-3e0a-432a-b441-b3fe8b453f6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.362088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81270acf-a9ce-49fe-b935-471dffd13372', 'scsi-SQEMU_QEMU_HARDDISK_81270acf-a9ce-49fe-b935-471dffd13372'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.362104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.362116 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.362127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:12:32.362138 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.362149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133-osd--block--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133', 'dm-uuid-LVM-X4M3ygsjIklutR4Bq0CdRZnkK8fpGU3dCbXr4lylFfSoFJ6SpSoOwzsfV30i5M00'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.362161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ce5409dd--a4db--5391--81df--07600c6136f3-osd--block--ce5409dd--a4db--5391--81df--07600c6136f3', 'dm-uuid-LVM-sj6dpbc449zUgbdRNYvEkSmp7ingtmE2YrK5U3Jm1Y4fwH5Jc0803iMSz1cWO7kv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 16:12:32.362188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 16:12:32.362221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 16:12:32.362233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 16:12:32.362244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 16:12:32.362260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 16:12:32.362271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 16:12:32.362282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 16:12:32.362293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-17 16:12:32.362319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part1', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part14', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part15', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part16', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 16:12:32.362337 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133-osd--block--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cohLej-sudo-7eKj-PPrS-63UL-P3Oi-F37loG', 'scsi-0QEMU_QEMU_HARDDISK_2998f2ff-923f-4644-b235-1d192431ff16', 'scsi-SQEMU_QEMU_HARDDISK_2998f2ff-923f-4644-b235-1d192431ff16'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 16:12:32.362349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ce5409dd--a4db--5391--81df--07600c6136f3-osd--block--ce5409dd--a4db--5391--81df--07600c6136f3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-254YoQ-Eg1l-2K9c-2pur-dTIZ-nJKU-cdfDuc', 'scsi-0QEMU_QEMU_HARDDISK_2c018cbd-00e9-4926-8b68-5b46915e5cd3', 'scsi-SQEMU_QEMU_HARDDISK_2c018cbd-00e9-4926-8b68-5b46915e5cd3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 16:12:32.362360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c46eecc-90d4-4da2-9e84-51f99bffdbae', 'scsi-SQEMU_QEMU_HARDDISK_4c46eecc-90d4-4da2-9e84-51f99bffdbae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 16:12:32.362378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 16:12:32.362396 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.362407 | orchestrator | 2025-09-17 16:12:32.362418 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-17 16:12:32.362429 | orchestrator | Wednesday 17 September 2025 16:01:46 +0000 (0:00:01.222) 0:00:32.397 *** 2025-09-17 16:12:32.362441 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362452 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362468 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362479 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362491 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362511 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362530 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362541 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362559 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873', 'scsi-SQEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part1', 'scsi-SQEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part14', 'scsi-SQEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part15', 'scsi-SQEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part16', 'scsi-SQEMU_QEMU_HARDDISK_e7e177ac-52b2-47f4-af69-5c6cfcbe9873-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-17 16:12:32.362583 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362595 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362607 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362623 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362634 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362646 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362663 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.362674 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.363719 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.363748 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.363769 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf', 'scsi-SQEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_7cfeb536-2f17-4290-8db9-eae7b72314bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.363791 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.363871 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.363886 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.363896 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.363911 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.363928 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.363938 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364006 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364020 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.364031 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364050 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110', 'scsi-SQEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part1', 'scsi-SQEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part14', 'scsi-SQEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part15', 'scsi-SQEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part16', 'scsi-SQEMU_QEMU_HARDDISK_0769a619-0eb8-44c9-9217-96fd06089110-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364069 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364137 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c66c71d--5352--5b3e--b37c--d5d685617e79-osd--block--3c66c71d--5352--5b3e--b37c--d5d685617e79', 'dm-uuid-LVM-6RyMlMdjeOp7j1vRqfxRAJS3ApVXn13X2Vfadb7vhG6ge7Y1r6yBKNgM18gGcW0H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364180 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c-osd--block--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c', 'dm-uuid-LVM-vIyaoAaQU4BLTggnPtIfxsEZD3fWK7cz6KBYfcsu0o52AotTNOuzw91MCFv9KHzh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364229 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.364247 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364258 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364360 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17f552da--d70b--5fe0--b76a--79be1323ddb4-osd--block--17f552da--d70b--5fe0--b76a--79be1323ddb4', 'dm-uuid-LVM-PdYNM3UuBlXGJqwN3in7M0c9PsPKTYErhy6wKZPnL1bwjK9oynUdSPDssfTgaOFP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364377 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364393 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d72d4826--7802--5629--b85e--59298af53c3a-osd--block--d72d4826--7802--5629--b85e--59298af53c3a', 'dm-uuid-LVM-n16DXE8IHM2auFI8fe4U37eK6xVMKQZubvLjSeU2AHjqqqODc6Exx2jcALvDasBJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364411 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364422 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364433 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133-osd--block--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133', 'dm-uuid-LVM-X4M3ygsjIklutR4Bq0CdRZnkK8fpGU3dCbXr4lylFfSoFJ6SpSoOwzsfV30i5M00'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364503 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364517 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ce5409dd--a4db--5391--81df--07600c6136f3-osd--block--ce5409dd--a4db--5391--81df--07600c6136f3', 'dm-uuid-LVM-sj6dpbc449zUgbdRNYvEkSmp7ingtmE2YrK5U3Jm1Y4fwH5Jc0803iMSz1cWO7kv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364533 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364550 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364561 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364572 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364654 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364671 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364682 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364704 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364715 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364788 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part1', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part14', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part15', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part16', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364805 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364827 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364838 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364924 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part1', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part14', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part15', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part16', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.364978 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3c66c71d--5352--5b3e--b37c--d5d685617e79-osd--block--3c66c71d--5352--5b3e--b37c--d5d685617e79'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wZtIiw-R8yr-uRlx-X2bF-nyyO-Cudf-jRf67i', 'scsi-0QEMU_QEMU_HARDDISK_abcb278f-9464-4e60-af45-8a9c7109c560', 'scsi-SQEMU_QEMU_HARDDISK_abcb278f-9464-4e60-af45-8a9c7109c560'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133-osd--block--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cohLej-sudo-7eKj-PPrS-63UL-P3Oi-F37loG', 'scsi-0QEMU_QEMU_HARDDISK_2998f2ff-923f-4644-b235-1d192431ff16', 'scsi-SQEMU_QEMU_HARDDISK_2998f2ff-923f-4644-b235-1d192431ff16'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365024 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365130 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ce5409dd--a4db--5391--81df--07600c6136f3-osd--block--ce5409dd--a4db--5391--81df--07600c6136f3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-254YoQ-Eg1l-2K9c-2pur-dTIZ-nJKU-cdfDuc', 'scsi-0QEMU_QEMU_HARDDISK_2c018cbd-00e9-4926-8b68-5b46915e5cd3', 'scsi-SQEMU_QEMU_HARDDISK_2c018cbd-00e9-4926-8b68-5b46915e5cd3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365154 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c46eecc-90d4-4da2-9e84-51f99bffdbae', 'scsi-SQEMU_QEMU_HARDDISK_4c46eecc-90d4-4da2-9e84-51f99bffdbae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365178 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c-osd--block--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RWWDaA-yPOu-TiwM-4vDa-dfQY-ugWA-9ZlceI', 'scsi-0QEMU_QEMU_HARDDISK_5c4f947a-1fa9-4d40-922c-b00760e10f53', 'scsi-SQEMU_QEMU_HARDDISK_5c4f947a-1fa9-4d40-922c-b00760e10f53'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365254 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365274 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365290 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.365409 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81270acf-a9ce-49fe-b935-471dffd13372', 'scsi-SQEMU_QEMU_HARDDISK_81270acf-a9ce-49fe-b935-471dffd13372'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365433 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365450 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': 
[], 'uuids': ['2025-09-17-15-19-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.365488 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:12:32.365507 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.365602 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-17 16:12:32.365619 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--17f552da--d70b--5fe0--b76a--79be1323ddb4-osd--block--17f552da--d70b--5fe0--b76a--79be1323ddb4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lZa7Rh-i3Rn-xMzW-Vlv1-fNcw-aM2A-R4MUAR', 'scsi-0QEMU_QEMU_HARDDISK_389a752f-d381-48a7-a4b5-e7f86559b7a2', 'scsi-SQEMU_QEMU_HARDDISK_389a752f-d381-48a7-a4b5-e7f86559b7a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365643 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d72d4826--7802--5629--b85e--59298af53c3a-osd--block--d72d4826--7802--5629--b85e--59298af53c3a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Cs4drC-MDE3-7Bth-4yzp-cd2h-cv6K-SUr5e3', 'scsi-0QEMU_QEMU_HARDDISK_11507ddf-c78f-4c5a-8643-6bad2a8b39ae', 'scsi-SQEMU_QEMU_HARDDISK_11507ddf-c78f-4c5a-8643-6bad2a8b39ae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365653 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff1b16a2-3e0a-432a-b441-b3fe8b453f6d', 'scsi-SQEMU_QEMU_HARDDISK_ff1b16a2-3e0a-432a-b441-b3fe8b453f6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365663 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-17 16:12:32.365674 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.365704 | orchestrator |
2025-09-17 16:12:32.365714 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-17 16:12:32.365725 | orchestrator | Wednesday 17 September 2025 16:01:48 +0000 (0:00:02.094) 0:00:34.491 ***
2025-09-17 16:12:32.365734 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.365744 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.365754 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.365829 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.365842 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.365852 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.365861 | orchestrator |
2025-09-17 16:12:32.365871 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-17 16:12:32.365881 | orchestrator | Wednesday 17 September 2025 16:01:50 +0000 (0:00:02.471) 0:00:36.962 ***
2025-09-17 16:12:32.365890 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.365908 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.365918 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.365927 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.365936 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.365946 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.365955 | orchestrator |
2025-09-17 16:12:32.365965 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-17 16:12:32.365974 | orchestrator | Wednesday 17 September 2025 16:01:51 +0000 (0:00:00.829) 0:00:37.791 ***
2025-09-17 16:12:32.365984 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.365993 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.366003 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.366012 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.366052 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.366062 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.366072 | orchestrator |
2025-09-17 16:12:32.366081 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-17 16:12:32.366091 | orchestrator | Wednesday 17 September 2025 16:01:52 +0000 (0:00:01.023) 0:00:38.815 ***
2025-09-17 16:12:32.366101 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.366110 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.366138 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.366148 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.366158 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.366167 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.366177 | orchestrator |
2025-09-17 16:12:32.366186 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-17 16:12:32.366222 | orchestrator | Wednesday 17 September 2025 16:01:53 +0000 (0:00:01.072) 0:00:39.887 ***
2025-09-17 16:12:32.366240 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.366250 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.366260 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.366269 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.366279 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.366288 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.366297 | orchestrator |
2025-09-17 16:12:32.366307 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-17 16:12:32.366322 | orchestrator | Wednesday 17 September 2025 16:01:54 +0000 (0:00:01.004) 0:00:40.892 ***
2025-09-17 16:12:32.366332 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.366341 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.366351 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.366360 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.366377 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.366393 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.366409 | orchestrator |
2025-09-17 16:12:32.366424 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-17 16:12:32.366441 | orchestrator | Wednesday 17 September 2025 16:01:55 +0000 (0:00:00.732) 0:00:41.624 ***
2025-09-17 16:12:32.366458 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 16:12:32.366474 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-17 16:12:32.366488 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-09-17 16:12:32.366500 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-17 16:12:32.366510 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-09-17 16:12:32.366521 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-09-17 16:12:32.366532 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-17 16:12:32.366542 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-09-17 16:12:32.366553 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-09-17 16:12:32.366563 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-17 16:12:32.366573 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-17 16:12:32.366593 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-09-17 16:12:32.366604 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-17 16:12:32.366614 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-17 16:12:32.366625 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-17 16:12:32.366635 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-17 16:12:32.366645 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-17 16:12:32.366656 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-17 16:12:32.366666 | orchestrator |
2025-09-17 16:12:32.366677 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-17 16:12:32.366688 | orchestrator | Wednesday 17 September 2025 16:01:58 +0000 (0:00:03.219) 0:00:44.843 ***
2025-09-17 16:12:32.366699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 16:12:32.366710 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-17 16:12:32.366720 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-17 16:12:32.366731 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-17 16:12:32.366742 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-17 16:12:32.366753 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-17 16:12:32.366764 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.366774 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-17 16:12:32.366785 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-17 16:12:32.366796 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-17 16:12:32.366807 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.366817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-17 16:12:32.366911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-17 16:12:32.366926 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.366935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-17 16:12:32.366945 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-17 16:12:32.366954 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-17 16:12:32.366964 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-17 16:12:32.366973 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.366982 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.366992 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-17 16:12:32.367001 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-17 16:12:32.367011 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-17 16:12:32.367020 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.367029 | orchestrator |
2025-09-17 16:12:32.367039 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-17 16:12:32.367048 | orchestrator | Wednesday 17 September 2025 16:01:59 +0000 (0:00:00.739) 0:00:45.582 ***
2025-09-17 16:12:32.367058 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.367067 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.367076 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.367086 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.367096 | orchestrator |
2025-09-17 16:12:32.367106 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-17 16:12:32.367116 | orchestrator | Wednesday 17 September 2025 16:02:00 +0000 (0:00:01.123) 0:00:46.706 ***
2025-09-17 16:12:32.367125 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.367135 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.367144 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.367160 | orchestrator |
2025-09-17 16:12:32.367170 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-17 16:12:32.367179 | orchestrator | Wednesday 17 September 2025 16:02:01 +0000 (0:00:00.503) 0:00:47.210 ***
2025-09-17 16:12:32.367189 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.367256 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.367267 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.367277 | orchestrator |
2025-09-17 16:12:32.367287 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-17 16:12:32.367296 | orchestrator | Wednesday 17 September 2025 16:02:01 +0000 (0:00:00.647) 0:00:47.857 ***
2025-09-17 16:12:32.367306 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.367315 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.367325 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.367334 | orchestrator |
2025-09-17 16:12:32.367344 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-17 16:12:32.367354 | orchestrator | Wednesday 17 September 2025 16:02:02 +0000 (0:00:00.440) 0:00:48.298 ***
2025-09-17 16:12:32.367365 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.367381 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.367448 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.367468 | orchestrator |
2025-09-17 16:12:32.367481 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-17 16:12:32.367491 | orchestrator | Wednesday 17 September 2025 16:02:02 +0000 (0:00:00.664) 0:00:48.963 ***
2025-09-17 16:12:32.367501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 16:12:32.367510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 16:12:32.367520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 16:12:32.367529 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.367539 | orchestrator |
2025-09-17 16:12:32.367548 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-17 16:12:32.367558 | orchestrator | Wednesday 17 September 2025 16:02:03 +0000 (0:00:00.345) 0:00:49.309 ***
2025-09-17 16:12:32.367567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 16:12:32.367577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 16:12:32.367586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 16:12:32.367595 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.367605 | orchestrator |
2025-09-17 16:12:32.367614 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-17 16:12:32.367623 | orchestrator | Wednesday 17 September 2025 16:02:03 +0000 (0:00:00.577) 0:00:49.886 ***
2025-09-17 16:12:32.367633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 16:12:32.367642 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 16:12:32.367651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 16:12:32.367661 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.367670 | orchestrator |
2025-09-17 16:12:32.367679 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-17 16:12:32.367689 | orchestrator | Wednesday 17 September 2025 16:02:04 +0000 (0:00:00.713) 0:00:50.599 ***
2025-09-17 16:12:32.367698 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.367708 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.367717 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.367727 | orchestrator |
2025-09-17 16:12:32.367736 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-17 16:12:32.367746 | orchestrator | Wednesday 17 September 2025 16:02:05 +0000 (0:00:00.838) 0:00:51.437 ***
2025-09-17 16:12:32.367755 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-17 16:12:32.367765 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-17 16:12:32.367773 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-17 16:12:32.367781 | orchestrator |
2025-09-17 16:12:32.367789 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-17 16:12:32.367804 | orchestrator | Wednesday 17 September 2025 16:02:05 +0000 (0:00:00.646) 0:00:52.084 ***
2025-09-17 16:12:32.367843 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 16:12:32.367853 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-17 16:12:32.367861 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-17 16:12:32.367869 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-09-17 16:12:32.367877 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-17 16:12:32.367884 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-17 16:12:32.367892 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-17 16:12:32.367899 | orchestrator |
2025-09-17 16:12:32.367907 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-17 16:12:32.367915 | orchestrator | Wednesday 17 September 2025 16:02:06 +0000 (0:00:00.969) 0:00:53.054 ***
2025-09-17 16:12:32.367923 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 16:12:32.367930 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-17 16:12:32.367938 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-17 16:12:32.367946 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-09-17 16:12:32.367953 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-17 16:12:32.367961 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-17 16:12:32.367969 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-17 16:12:32.367977 | orchestrator |
2025-09-17 16:12:32.367985 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-17 16:12:32.367992 | orchestrator | Wednesday 17 September 2025 16:02:08 +0000 (0:00:01.933) 0:00:54.987 ***
2025-09-17 16:12:32.368000 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.368009 | orchestrator |
2025-09-17 16:12:32.368021 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-17 16:12:32.368029 | orchestrator | Wednesday 17 September 2025 16:02:09 +0000 (0:00:01.037) 0:00:56.024 ***
2025-09-17 16:12:32.368037 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.368045 | orchestrator |
2025-09-17 16:12:32.368052 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-17 16:12:32.368060 | orchestrator | Wednesday 17 September 2025 16:02:11 +0000 (0:00:01.247) 0:00:57.272 ***
2025-09-17 16:12:32.368068 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.368075 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.368083 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.368090 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.368098 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.368106 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.368113 | orchestrator |
2025-09-17 16:12:32.368121 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-17 16:12:32.368129 | orchestrator | Wednesday 17 September 2025 16:02:12 +0000 (0:00:01.051) 0:00:58.323 ***
2025-09-17 16:12:32.368136 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.368144 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.368152 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.368159 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.368174 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.368181 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.368189 | orchestrator |
2025-09-17 16:12:32.368217 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-17 16:12:32.368227 | orchestrator | Wednesday 17 September 2025 16:02:13 +0000 (0:00:01.595) 0:00:59.919 ***
2025-09-17 16:12:32.368234 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.368242 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.368250 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.368258 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.368265 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.368273 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.368281 | orchestrator |
2025-09-17 16:12:32.368289 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-17 16:12:32.368296 | orchestrator | Wednesday 17 September 2025 16:02:15 +0000 (0:00:01.748) 0:01:01.668 ***
2025-09-17 16:12:32.368304 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.368312 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.368319 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.368327 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.368335 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.368342 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.368350 | orchestrator |
2025-09-17 16:12:32.368358 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-17 16:12:32.368368 | orchestrator | Wednesday 17 September 2025 16:02:16 +0000 (0:00:01.045) 0:01:02.714 ***
2025-09-17 16:12:32.368382 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.368395 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.368407 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.368422 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.368430 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.368438 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.368446 | orchestrator |
2025-09-17 16:12:32.368454 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-17 16:12:32.368462 | orchestrator | Wednesday 17 September 2025 16:02:18 +0000 (0:00:01.497) 0:01:04.211 ***
2025-09-17 16:12:32.368496 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.368505 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.368513 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.368521 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.368528 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.368536 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.368544 | orchestrator |
2025-09-17 16:12:32.368551 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-17 16:12:32.368559 | orchestrator | Wednesday 17 September 2025 16:02:18 +0000 (0:00:00.767) 0:01:04.978 ***
2025-09-17 16:12:32.368567 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.368575 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.368582 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.368590 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.368598 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.368605 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.368613 | orchestrator |
2025-09-17 16:12:32.368621 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-17 16:12:32.368629 | orchestrator | Wednesday 17 September 2025 16:02:19 +0000 (0:00:01.031) 0:01:06.010 ***
2025-09-17 16:12:32.368636 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.368644 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.368652 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.368660 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.368667 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.368675 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.368683 | orchestrator |
2025-09-17 16:12:32.368691 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-17 16:12:32.368706 | orchestrator | Wednesday 17 September 2025 16:02:21 +0000 (0:00:01.423) 0:01:07.433 ***
2025-09-17 16:12:32.368714 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.368722 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.368730 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.368737 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.368745 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.368752 |
orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.368760 | orchestrator | 2025-09-17 16:12:32.368767 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-17 16:12:32.368775 | orchestrator | Wednesday 17 September 2025 16:02:22 +0000 (0:00:01.273) 0:01:08.707 *** 2025-09-17 16:12:32.368783 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.368791 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.368798 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.368806 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.368814 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.368821 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.368829 | orchestrator | 2025-09-17 16:12:32.368841 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-17 16:12:32.368849 | orchestrator | Wednesday 17 September 2025 16:02:23 +0000 (0:00:00.540) 0:01:09.247 *** 2025-09-17 16:12:32.368857 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.368864 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.368872 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.368880 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.368887 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.368895 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.368903 | orchestrator | 2025-09-17 16:12:32.368910 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-17 16:12:32.368918 | orchestrator | Wednesday 17 September 2025 16:02:23 +0000 (0:00:00.654) 0:01:09.902 *** 2025-09-17 16:12:32.368926 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.368933 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.368941 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.368949 | orchestrator | ok: 
[testbed-node-3] 2025-09-17 16:12:32.368956 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.368964 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.368972 | orchestrator | 2025-09-17 16:12:32.368980 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-17 16:12:32.368987 | orchestrator | Wednesday 17 September 2025 16:02:24 +0000 (0:00:00.607) 0:01:10.509 *** 2025-09-17 16:12:32.368995 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.369003 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.369010 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.369018 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.369026 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.369034 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.369041 | orchestrator | 2025-09-17 16:12:32.369049 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-17 16:12:32.369057 | orchestrator | Wednesday 17 September 2025 16:02:25 +0000 (0:00:00.858) 0:01:11.368 *** 2025-09-17 16:12:32.369065 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.369072 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.369080 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.369088 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.369095 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.369103 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.369111 | orchestrator | 2025-09-17 16:12:32.369118 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-17 16:12:32.369126 | orchestrator | Wednesday 17 September 2025 16:02:25 +0000 (0:00:00.496) 0:01:11.864 *** 2025-09-17 16:12:32.369134 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.369142 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.369149 | 
orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.369166 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.369173 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.369181 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.369189 | orchestrator | 2025-09-17 16:12:32.369215 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-17 16:12:32.369224 | orchestrator | Wednesday 17 September 2025 16:02:26 +0000 (0:00:00.731) 0:01:12.596 *** 2025-09-17 16:12:32.369231 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.369239 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.369247 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.369254 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.369262 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.369270 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.369277 | orchestrator | 2025-09-17 16:12:32.369285 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-17 16:12:32.369317 | orchestrator | Wednesday 17 September 2025 16:02:27 +0000 (0:00:00.532) 0:01:13.128 *** 2025-09-17 16:12:32.369327 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.369335 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.369342 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.369350 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.369357 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.369370 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.369384 | orchestrator | 2025-09-17 16:12:32.369399 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-17 16:12:32.369412 | orchestrator | Wednesday 17 September 2025 16:02:27 +0000 (0:00:00.766) 0:01:13.894 *** 2025-09-17 16:12:32.369424 | orchestrator | ok: 
[testbed-node-0] 2025-09-17 16:12:32.369438 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.369454 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.369468 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.369484 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.369497 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.369511 | orchestrator | 2025-09-17 16:12:32.369519 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-17 16:12:32.369527 | orchestrator | Wednesday 17 September 2025 16:02:28 +0000 (0:00:00.550) 0:01:14.445 *** 2025-09-17 16:12:32.369534 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.369542 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.369550 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.369557 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.369565 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.369572 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.369580 | orchestrator | 2025-09-17 16:12:32.369588 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-17 16:12:32.369596 | orchestrator | Wednesday 17 September 2025 16:02:29 +0000 (0:00:01.043) 0:01:15.488 *** 2025-09-17 16:12:32.369603 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:12:32.369611 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:12:32.369619 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:12:32.369626 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:12:32.369634 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:12:32.369642 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:12:32.369649 | orchestrator | 2025-09-17 16:12:32.369657 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-17 16:12:32.369665 | orchestrator | Wednesday 17 September 2025 16:02:31 +0000 (0:00:01.676) 
0:01:17.164 *** 2025-09-17 16:12:32.369672 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:12:32.369680 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:12:32.369687 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:12:32.369695 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:12:32.369703 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:12:32.369710 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:12:32.369718 | orchestrator | 2025-09-17 16:12:32.369730 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-17 16:12:32.369746 | orchestrator | Wednesday 17 September 2025 16:02:33 +0000 (0:00:02.103) 0:01:19.268 *** 2025-09-17 16:12:32.369754 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.369762 | orchestrator | 2025-09-17 16:12:32.369770 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-17 16:12:32.369777 | orchestrator | Wednesday 17 September 2025 16:02:34 +0000 (0:00:01.045) 0:01:20.314 *** 2025-09-17 16:12:32.369785 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.369793 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.369801 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.369808 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.369816 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.369823 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.369831 | orchestrator | 2025-09-17 16:12:32.369839 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-17 16:12:32.369846 | orchestrator | Wednesday 17 September 2025 16:02:34 +0000 (0:00:00.665) 0:01:20.979 *** 2025-09-17 16:12:32.369854 | orchestrator | skipping: 
[testbed-node-0] 2025-09-17 16:12:32.369862 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.369869 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.369877 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.369884 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.369892 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.369900 | orchestrator | 2025-09-17 16:12:32.369907 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-17 16:12:32.369915 | orchestrator | Wednesday 17 September 2025 16:02:35 +0000 (0:00:00.527) 0:01:21.506 *** 2025-09-17 16:12:32.369923 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-17 16:12:32.369930 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-17 16:12:32.369938 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-17 16:12:32.369946 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-17 16:12:32.369954 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-17 16:12:32.369961 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-17 16:12:32.369969 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-17 16:12:32.369977 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-17 16:12:32.369985 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-17 16:12:32.369992 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-17 16:12:32.370000 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-17 
16:12:32.370008 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-17 16:12:32.370055 | orchestrator | 2025-09-17 16:12:32.370095 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-17 16:12:32.370105 | orchestrator | Wednesday 17 September 2025 16:02:36 +0000 (0:00:01.414) 0:01:22.921 *** 2025-09-17 16:12:32.370113 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:12:32.370120 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:12:32.370128 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:12:32.370136 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:12:32.370143 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:12:32.370151 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:12:32.370159 | orchestrator | 2025-09-17 16:12:32.370167 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-17 16:12:32.370181 | orchestrator | Wednesday 17 September 2025 16:02:37 +0000 (0:00:00.830) 0:01:23.751 *** 2025-09-17 16:12:32.370188 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.370244 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.370254 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.370262 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.370269 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.370277 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.370285 | orchestrator | 2025-09-17 16:12:32.370293 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-17 16:12:32.370300 | orchestrator | Wednesday 17 September 2025 16:02:38 +0000 (0:00:00.675) 0:01:24.426 *** 2025-09-17 16:12:32.370308 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.370316 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.370323 | 
orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.370331 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.370338 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.370346 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.370354 | orchestrator | 2025-09-17 16:12:32.370362 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-17 16:12:32.370378 | orchestrator | Wednesday 17 September 2025 16:02:38 +0000 (0:00:00.525) 0:01:24.952 *** 2025-09-17 16:12:32.370391 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.370404 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.370418 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.370433 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.370445 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.370453 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.370461 | orchestrator | 2025-09-17 16:12:32.370469 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-17 16:12:32.370477 | orchestrator | Wednesday 17 September 2025 16:02:39 +0000 (0:00:00.615) 0:01:25.568 *** 2025-09-17 16:12:32.370491 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.370499 | orchestrator | 2025-09-17 16:12:32.370507 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-17 16:12:32.370515 | orchestrator | Wednesday 17 September 2025 16:02:40 +0000 (0:00:00.998) 0:01:26.566 *** 2025-09-17 16:12:32.370523 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.370531 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.370538 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.370546 | orchestrator | ok: 
[testbed-node-2] 2025-09-17 16:12:32.370554 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.370561 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.370569 | orchestrator | 2025-09-17 16:12:32.370577 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-17 16:12:32.370585 | orchestrator | Wednesday 17 September 2025 16:04:00 +0000 (0:01:20.046) 0:02:46.613 *** 2025-09-17 16:12:32.370593 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-17 16:12:32.370600 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-17 16:12:32.370608 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-17 16:12:32.370616 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.370624 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-17 16:12:32.370631 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-17 16:12:32.370639 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-17 16:12:32.370647 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.370654 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-17 16:12:32.370671 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-17 16:12:32.370679 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-17 16:12:32.370687 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.370695 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-17 16:12:32.370702 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-17 16:12:32.370710 | orchestrator | skipping: 
[testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-17 16:12:32.370718 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.370726 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-17 16:12:32.370733 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-17 16:12:32.370741 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-17 16:12:32.370748 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.370756 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-17 16:12:32.370762 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-17 16:12:32.370769 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-17 16:12:32.370801 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.370809 | orchestrator | 2025-09-17 16:12:32.370815 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-17 16:12:32.370822 | orchestrator | Wednesday 17 September 2025 16:04:01 +0000 (0:00:00.853) 0:02:47.466 *** 2025-09-17 16:12:32.370828 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.370834 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.370841 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.370847 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.370854 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.370861 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.370867 | orchestrator | 2025-09-17 16:12:32.370874 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-17 16:12:32.370880 | orchestrator | Wednesday 17 September 2025 16:04:01 +0000 (0:00:00.596) 0:02:48.063 *** 2025-09-17 16:12:32.370887 | 
orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.370893 | orchestrator | 2025-09-17 16:12:32.370900 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-17 16:12:32.370906 | orchestrator | Wednesday 17 September 2025 16:04:02 +0000 (0:00:00.175) 0:02:48.238 *** 2025-09-17 16:12:32.370913 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.370919 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.370926 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.370932 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.370939 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.370945 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.370952 | orchestrator | 2025-09-17 16:12:32.370958 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-17 16:12:32.370965 | orchestrator | Wednesday 17 September 2025 16:04:03 +0000 (0:00:00.856) 0:02:49.095 *** 2025-09-17 16:12:32.370971 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.370977 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.370984 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.370990 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.370997 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.371003 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.371009 | orchestrator | 2025-09-17 16:12:32.371016 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-17 16:12:32.371022 | orchestrator | Wednesday 17 September 2025 16:04:03 +0000 (0:00:00.674) 0:02:49.770 *** 2025-09-17 16:12:32.371029 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.371040 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.371047 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.371053 | 
orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.371063 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.371070 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.371076 | orchestrator | 2025-09-17 16:12:32.371083 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-17 16:12:32.371089 | orchestrator | Wednesday 17 September 2025 16:04:04 +0000 (0:00:00.860) 0:02:50.630 *** 2025-09-17 16:12:32.371096 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.371102 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.371109 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.371115 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.371122 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.371128 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.371134 | orchestrator | 2025-09-17 16:12:32.371141 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-17 16:12:32.371147 | orchestrator | Wednesday 17 September 2025 16:04:06 +0000 (0:00:01.935) 0:02:52.566 *** 2025-09-17 16:12:32.371154 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.371160 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.371167 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.371173 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.371180 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.371186 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.371192 | orchestrator | 2025-09-17 16:12:32.371214 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-17 16:12:32.371220 | orchestrator | Wednesday 17 September 2025 16:04:07 +0000 (0:00:00.955) 0:02:53.521 *** 2025-09-17 16:12:32.371227 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.371235 | orchestrator | 2025-09-17 16:12:32.371242 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-17 16:12:32.371248 | orchestrator | Wednesday 17 September 2025 16:04:08 +0000 (0:00:01.229) 0:02:54.751 *** 2025-09-17 16:12:32.371255 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.371261 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.371268 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.371274 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.371281 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.371287 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.371294 | orchestrator | 2025-09-17 16:12:32.371300 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-17 16:12:32.371307 | orchestrator | Wednesday 17 September 2025 16:04:09 +0000 (0:00:00.659) 0:02:55.411 *** 2025-09-17 16:12:32.371313 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.371320 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.371326 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.371333 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.371339 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.371346 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.371352 | orchestrator | 2025-09-17 16:12:32.371359 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-17 16:12:32.371365 | orchestrator | Wednesday 17 September 2025 16:04:10 +0000 (0:00:00.740) 0:02:56.151 *** 2025-09-17 16:12:32.371372 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.371378 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.371384 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.371391 | 
orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.371397 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.371404 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.371410 | orchestrator | 2025-09-17 16:12:32.371417 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-17 16:12:32.371452 | orchestrator | Wednesday 17 September 2025 16:04:10 +0000 (0:00:00.465) 0:02:56.617 *** 2025-09-17 16:12:32.371460 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.371466 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.371473 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.371479 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.371486 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.371492 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.371499 | orchestrator | 2025-09-17 16:12:32.371505 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-17 16:12:32.371512 | orchestrator | Wednesday 17 September 2025 16:04:11 +0000 (0:00:00.670) 0:02:57.288 *** 2025-09-17 16:12:32.371518 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.371525 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.371531 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.371538 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.371544 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.371551 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.371557 | orchestrator | 2025-09-17 16:12:32.371564 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-17 16:12:32.371570 | orchestrator | Wednesday 17 September 2025 16:04:11 +0000 (0:00:00.564) 0:02:57.852 *** 2025-09-17 16:12:32.371577 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.371583 | 
orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.371589 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.371596 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.371602 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.371609 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.371615 | orchestrator |
2025-09-17 16:12:32.371622 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-09-17 16:12:32.371628 | orchestrator | Wednesday 17 September 2025 16:04:12 +0000 (0:00:00.716) 0:02:58.569 ***
2025-09-17 16:12:32.371635 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.371641 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.371648 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.371654 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.371661 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.371667 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.371674 | orchestrator |
2025-09-17 16:12:32.371680 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-09-17 16:12:32.371687 | orchestrator | Wednesday 17 September 2025 16:04:13 +0000 (0:00:00.627) 0:02:59.197 ***
2025-09-17 16:12:32.371693 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.371700 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.371710 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.371716 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.371723 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.371729 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.371735 | orchestrator |
2025-09-17 16:12:32.371742 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-09-17 16:12:32.371749 | orchestrator | Wednesday 17 September 2025 16:04:13 +0000 (0:00:00.695) 0:02:59.892 ***
2025-09-17 16:12:32.371755 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.371762 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.371768 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.371775 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.371781 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.371788 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.371794 | orchestrator |
2025-09-17 16:12:32.371801 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-09-17 16:12:32.371807 | orchestrator | Wednesday 17 September 2025 16:04:14 +0000 (0:00:01.008) 0:03:00.900 ***
2025-09-17 16:12:32.371814 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.371826 | orchestrator |
2025-09-17 16:12:32.371833 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-09-17 16:12:32.371839 | orchestrator | Wednesday 17 September 2025 16:04:15 +0000 (0:00:01.007) 0:03:01.908 ***
2025-09-17 16:12:32.371846 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-09-17 16:12:32.371852 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-09-17 16:12:32.371859 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-09-17 16:12:32.371865 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-09-17 16:12:32.371872 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-09-17 16:12:32.371878 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-09-17 16:12:32.371885 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-09-17 16:12:32.371891 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-09-17 16:12:32.371898 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-09-17 16:12:32.371904 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-09-17 16:12:32.371911 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-09-17 16:12:32.371917 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-09-17 16:12:32.371924 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-09-17 16:12:32.371930 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-09-17 16:12:32.371937 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-09-17 16:12:32.371943 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-09-17 16:12:32.371950 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-09-17 16:12:32.371956 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-09-17 16:12:32.371963 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-09-17 16:12:32.371969 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-09-17 16:12:32.371976 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-09-17 16:12:32.372000 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-09-17 16:12:32.372008 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-09-17 16:12:32.372014 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-09-17 16:12:32.372021 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-09-17 16:12:32.372027 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-09-17 16:12:32.372033 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-09-17 16:12:32.372040 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-09-17 16:12:32.372047 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-09-17 16:12:32.372053 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-09-17 16:12:32.372059 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-09-17 16:12:32.372066 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-09-17 16:12:32.372072 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-09-17 16:12:32.372079 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-09-17 16:12:32.372085 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-09-17 16:12:32.372092 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-09-17 16:12:32.372098 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-09-17 16:12:32.372105 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-09-17 16:12:32.372111 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-09-17 16:12:32.372118 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-09-17 16:12:32.372124 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-09-17 16:12:32.372136 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-09-17 16:12:32.372142 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-09-17 16:12:32.372149 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-17 16:12:32.372155 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-09-17 16:12:32.372161 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-17 16:12:32.372168 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-09-17 16:12:32.372178 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-09-17 16:12:32.372184 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-09-17 16:12:32.372191 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-17 16:12:32.372211 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-17 16:12:32.372217 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-17 16:12:32.372224 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-17 16:12:32.372230 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-09-17 16:12:32.372237 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-17 16:12:32.372243 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-17 16:12:32.372250 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-17 16:12:32.372256 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-17 16:12:32.372263 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-17 16:12:32.372269 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-17 16:12:32.372275 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-17 16:12:32.372282 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-17 16:12:32.372288 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-17 16:12:32.372295 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-17 16:12:32.372301 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-17 16:12:32.372308 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-17 16:12:32.372314 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-17 16:12:32.372321 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-17 16:12:32.372327 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-17 16:12:32.372334 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-17 16:12:32.372340 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-17 16:12:32.372347 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-17 16:12:32.372353 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-17 16:12:32.372359 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-17 16:12:32.372366 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-17 16:12:32.372372 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-17 16:12:32.372379 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-17 16:12:32.372385 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-17 16:12:32.372392 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-17 16:12:32.372417 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-09-17 16:12:32.372432 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-17 16:12:32.372438 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-17 16:12:32.372445 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-17 16:12:32.372451 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-09-17 16:12:32.372458 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-09-17 16:12:32.372464 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-09-17 16:12:32.372471 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-17 16:12:32.372477 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-09-17 16:12:32.372484 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-17 16:12:32.372490 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-09-17 16:12:32.372497 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-09-17 16:12:32.372503 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-09-17 16:12:32.372510 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-09-17 16:12:32.372516 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-09-17 16:12:32.372523 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-09-17 16:12:32.372529 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-09-17 16:12:32.372536 | orchestrator |
2025-09-17 16:12:32.372542 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-09-17 16:12:32.372549 | orchestrator | Wednesday 17 September 2025 16:04:22 +0000 (0:00:07.100) 0:03:09.008 ***
2025-09-17 16:12:32.372555 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.372562 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.372568 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.372575 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.372581 | orchestrator |
2025-09-17 16:12:32.372588 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-09-17 16:12:32.372594 | orchestrator | Wednesday 17 September 2025 16:04:23 +0000 (0:00:00.769) 0:03:09.777 ***
2025-09-17 16:12:32.372604 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-17 16:12:32.372611 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-17 16:12:32.372617 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-17 16:12:32.372624 | orchestrator |
2025-09-17 16:12:32.372631 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-09-17 16:12:32.372637 | orchestrator | Wednesday 17 September 2025 16:04:24 +0000 (0:00:00.787) 0:03:10.565 ***
2025-09-17 16:12:32.372644 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-17 16:12:32.372650 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-17 16:12:32.372657 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-17 16:12:32.372663 | orchestrator |
2025-09-17 16:12:32.372670 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-09-17 16:12:32.372676 | orchestrator | Wednesday 17 September 2025 16:04:25 +0000 (0:00:01.315) 0:03:11.881 ***
2025-09-17 16:12:32.372683 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.372689 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.372696 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.372709 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.372716 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.372723 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.372729 | orchestrator |
2025-09-17 16:12:32.372736 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-09-17 16:12:32.372742 | orchestrator | Wednesday 17 September 2025 16:04:26 +0000 (0:00:00.666) 0:03:12.548 ***
2025-09-17 16:12:32.372749 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.372755 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.372761 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.372768 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.372774 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.372781 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.372787 | orchestrator |
2025-09-17 16:12:32.372794 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-09-17 16:12:32.372800 | orchestrator | Wednesday 17 September 2025 16:04:27 +0000 (0:00:00.629) 0:03:13.177 ***
2025-09-17 16:12:32.372807 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.372814 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.372820 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.372827 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.372833 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.372839 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.372846 | orchestrator |
2025-09-17 16:12:32.372852 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-09-17 16:12:32.372859 | orchestrator | Wednesday 17 September 2025 16:04:27 +0000 (0:00:00.771) 0:03:13.948 ***
2025-09-17 16:12:32.372865 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.372872 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.372896 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.372904 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.372910 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.372917 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.372923 | orchestrator |
2025-09-17 16:12:32.372930 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-09-17 16:12:32.372936 | orchestrator | Wednesday 17 September 2025 16:04:28 +0000 (0:00:00.710) 0:03:14.659 ***
2025-09-17 16:12:32.372943 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.372949 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.372955 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.372962 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.372969 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.372975 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.372981 | orchestrator |
2025-09-17 16:12:32.372988 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-09-17 16:12:32.372995 | orchestrator | Wednesday 17 September 2025 16:04:29 +0000 (0:00:00.866) 0:03:15.526 ***
2025-09-17 16:12:32.373001 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373008 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373014 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373020 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.373027 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.373033 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.373039 | orchestrator |
2025-09-17 16:12:32.373046 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-09-17 16:12:32.373052 | orchestrator | Wednesday 17 September 2025 16:04:30 +0000 (0:00:00.799) 0:03:16.325 ***
2025-09-17 16:12:32.373059 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373065 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373071 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373078 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.373084 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.373090 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.373101 | orchestrator |
2025-09-17 16:12:32.373108 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-09-17 16:12:32.373114 | orchestrator | Wednesday 17 September 2025 16:04:31 +0000 (0:00:00.786) 0:03:17.111 ***
2025-09-17 16:12:32.373121 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373128 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373134 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373140 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.373146 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.373156 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.373162 | orchestrator |
2025-09-17 16:12:32.373169 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-09-17 16:12:32.373175 | orchestrator | Wednesday 17 September 2025 16:04:31 +0000 (0:00:00.539) 0:03:17.651 ***
2025-09-17 16:12:32.373182 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373188 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373210 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373217 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.373224 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.373230 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.373237 | orchestrator |
2025-09-17 16:12:32.373243 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-09-17 16:12:32.373250 | orchestrator | Wednesday 17 September 2025 16:04:34 +0000 (0:00:03.319) 0:03:20.971 ***
2025-09-17 16:12:32.373257 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373263 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373269 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373276 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.373282 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.373289 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.373295 | orchestrator |
2025-09-17 16:12:32.373302 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-09-17 16:12:32.373308 | orchestrator | Wednesday 17 September 2025 16:04:35 +0000 (0:00:00.796) 0:03:21.768 ***
2025-09-17 16:12:32.373315 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373321 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373327 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373334 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.373340 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.373347 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.373353 | orchestrator |
2025-09-17 16:12:32.373359 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-09-17 16:12:32.373366 | orchestrator | Wednesday 17 September 2025 16:04:36 +0000 (0:00:00.778) 0:03:22.546 ***
2025-09-17 16:12:32.373372 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373379 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373385 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373392 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.373398 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.373404 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.373411 | orchestrator |
2025-09-17 16:12:32.373417 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-09-17 16:12:32.373424 | orchestrator | Wednesday 17 September 2025 16:04:36 +0000 (0:00:00.520) 0:03:23.067 ***
2025-09-17 16:12:32.373430 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373437 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373443 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373450 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-17 16:12:32.373456 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-17 16:12:32.373463 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-17 16:12:32.373473 | orchestrator |
2025-09-17 16:12:32.373480 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-09-17 16:12:32.373505 | orchestrator | Wednesday 17 September 2025 16:04:37 +0000 (0:00:00.696) 0:03:23.763 ***
2025-09-17 16:12:32.373513 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373521 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-09-17 16:12:32.373529 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-09-17 16:12:32.373538 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-09-17 16:12:32.373545 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373551 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373558 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.373565 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-09-17 16:12:32.373571 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.373581 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-09-17 16:12:32.373588 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-09-17 16:12:32.373595 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.373602 | orchestrator |
2025-09-17 16:12:32.373608 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-09-17 16:12:32.373615 | orchestrator | Wednesday 17 September 2025 16:04:39 +0000 (0:00:01.747) 0:03:25.511 ***
2025-09-17 16:12:32.373621 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373628 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373634 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373640 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.373646 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.373653 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.373659 | orchestrator |
2025-09-17 16:12:32.373666 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-09-17 16:12:32.373672 | orchestrator | Wednesday 17 September 2025 16:04:40 +0000 (0:00:00.851) 0:03:26.363 ***
2025-09-17 16:12:32.373679 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373685 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373691 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373698 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.373704 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.373714 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.373721 | orchestrator |
2025-09-17 16:12:32.373727 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-17 16:12:32.373734 | orchestrator | Wednesday 17 September 2025 16:04:40 +0000 (0:00:00.540) 0:03:26.903 ***
2025-09-17 16:12:32.373740 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373747 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373753 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373759 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.373766 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.373772 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.373778 | orchestrator |
2025-09-17 16:12:32.373785 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-17 16:12:32.373791 | orchestrator | Wednesday 17 September 2025 16:04:41 +0000 (0:00:00.940) 0:03:27.844 ***
2025-09-17 16:12:32.373798 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373804 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373811 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373817 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.373823 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.373830 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.373836 | orchestrator |
2025-09-17 16:12:32.373843 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-17 16:12:32.373849 | orchestrator | Wednesday 17 September 2025 16:04:42 +0000 (0:00:00.700) 0:03:28.545 ***
2025-09-17 16:12:32.373856 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373862 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373868 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373892 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.373900 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.373907 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.373913 | orchestrator |
2025-09-17 16:12:32.373920 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-17 16:12:32.373926 | orchestrator | Wednesday 17 September 2025 16:04:43 +0000 (0:00:01.039) 0:03:29.585 ***
2025-09-17 16:12:32.373933 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.373939 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.373946 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.373952 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.373959 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.373965 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.373972 | orchestrator |
2025-09-17 16:12:32.373978 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-17 16:12:32.373985 | orchestrator | Wednesday 17 September 2025 16:04:44 +0000 (0:00:01.456) 0:03:31.041 ***
2025-09-17 16:12:32.373991 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-17 16:12:32.373998 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-17 16:12:32.374004 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-17 16:12:32.374011 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.374039 | orchestrator |
2025-09-17 16:12:32.374046 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-17 16:12:32.374052 | orchestrator | Wednesday 17 September 2025 16:04:45 +0000 (0:00:00.485) 0:03:31.527 ***
2025-09-17 16:12:32.374059 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-17 16:12:32.374065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-17 16:12:32.374071 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-17 16:12:32.374078 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.374084 | orchestrator |
2025-09-17 16:12:32.374090 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-17 16:12:32.374097 | orchestrator | Wednesday 17 September 2025 16:04:46 +0000 (0:00:00.577) 0:03:32.104 ***
2025-09-17 16:12:32.374108 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-17 16:12:32.374115 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-17 16:12:32.374121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-17 16:12:32.374127 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.374134 | orchestrator |
2025-09-17 16:12:32.374140 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-17 16:12:32.374150 | orchestrator | Wednesday 17 September 2025 16:04:46 +0000 (0:00:00.757) 0:03:32.861 ***
2025-09-17 16:12:32.374157 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.374163 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.374170 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.374176 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.374183 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.374189 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.374211 | orchestrator |
2025-09-17 16:12:32.374218 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-17 16:12:32.374224 | orchestrator | Wednesday 17 September 2025 16:04:47 +0000 (0:00:00.595) 0:03:33.457 ***
2025-09-17 16:12:32.374231 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-09-17 16:12:32.374237 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-09-17 16:12:32.374244 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-09-17 16:12:32.374250 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.374257 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.374263 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.374269 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-17 16:12:32.374276 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-17 16:12:32.374282 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-17 16:12:32.374288 | orchestrator |
2025-09-17 16:12:32.374295 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-09-17 16:12:32.374301 | orchestrator | Wednesday 17 September 2025 16:04:49 +0000 (0:00:01.876) 0:03:35.333 ***
2025-09-17 16:12:32.374308 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.374314 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.374321 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.374327 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.374333 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.374340 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.374346 | orchestrator |
2025-09-17 16:12:32.374352 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-17 16:12:32.374359 | orchestrator | Wednesday 17 September 2025 16:04:52 +0000 (0:00:02.928) 0:03:38.262 ***
2025-09-17 16:12:32.374365 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.374371 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.374378 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.374384 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.374391 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.374397 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.374403 | orchestrator |
2025-09-17 16:12:32.374410 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-17 16:12:32.374416 | orchestrator | Wednesday 17 September 2025 16:04:53 +0000 (0:00:01.151) 0:03:39.414 ***
2025-09-17 16:12:32.374423 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.374429 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.374436 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.374442 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:12:32.374449 | orchestrator |
2025-09-17 16:12:32.374455 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-17 16:12:32.374462 | orchestrator | Wednesday 17 September 2025 16:04:54 +0000 (0:00:01.012) 0:03:40.426 ***
2025-09-17 16:12:32.374473 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.374479 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.374485 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.374492 | orchestrator |
2025-09-17 16:12:32.374498 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-17 16:12:32.374526 | orchestrator | Wednesday 17 September 2025 16:04:54 +0000 (0:00:00.333) 0:03:40.760 ***
2025-09-17 16:12:32.374534 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.374541 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.374547 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.374554 | orchestrator |
2025-09-17 16:12:32.374560 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-17 16:12:32.374567 | orchestrator | Wednesday 17 September 2025 16:04:56 +0000 (0:00:01.351) 0:03:42.112 ***
2025-09-17 16:12:32.374574 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 16:12:32.374580 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-17 16:12:32.374587 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-17 16:12:32.374593 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.374599 | orchestrator |
2025-09-17 16:12:32.374606 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-17 16:12:32.374613 | orchestrator | Wednesday 17 September 2025 16:04:56 +0000 (0:00:00.415) 0:03:42.785 ***
2025-09-17 16:12:32.374619 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.374626 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.374632 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.374639 | orchestrator |
2025-09-17 16:12:32.374645 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-17 16:12:32.374652 | orchestrator | Wednesday 17 September 2025 16:04:57 +0000 (0:00:00.415) 0:03:43.201 ***
2025-09-17 16:12:32.374658 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.374664 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.374671 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.374677 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.374684 | orchestrator |
2025-09-17 16:12:32.374691 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-17 16:12:32.374697 | orchestrator | Wednesday 17 September 2025 16:04:58 +0000 (0:00:00.946) 0:03:44.148 ***
2025-09-17 16:12:32.374704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 16:12:32.374710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 16:12:32.374717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 16:12:32.374723 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.374730 | orchestrator |
2025-09-17 16:12:32.374736 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-17 16:12:32.374746 | orchestrator | Wednesday 17 September 2025 16:04:58 +0000 (0:00:00.436) 0:03:44.585 ***
2025-09-17 16:12:32.374753 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.374760 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.374766 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.374773 | orchestrator |
2025-09-17 16:12:32.374779 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-17 16:12:32.374786 | orchestrator | Wednesday 17 September 2025 16:04:58 +0000 (0:00:00.430) 0:03:45.015 ***
2025-09-17 16:12:32.374792 | orchestrator |
skipping: [testbed-node-3] 2025-09-17 16:12:32.374799 | orchestrator | 2025-09-17 16:12:32.374805 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-17 16:12:32.374812 | orchestrator | Wednesday 17 September 2025 16:04:59 +0000 (0:00:00.189) 0:03:45.205 *** 2025-09-17 16:12:32.374818 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.374825 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.374831 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.374842 | orchestrator | 2025-09-17 16:12:32.374848 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-17 16:12:32.374855 | orchestrator | Wednesday 17 September 2025 16:04:59 +0000 (0:00:00.243) 0:03:45.448 *** 2025-09-17 16:12:32.374861 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.374868 | orchestrator | 2025-09-17 16:12:32.374874 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-17 16:12:32.374881 | orchestrator | Wednesday 17 September 2025 16:04:59 +0000 (0:00:00.191) 0:03:45.639 *** 2025-09-17 16:12:32.374887 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.374894 | orchestrator | 2025-09-17 16:12:32.374900 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-17 16:12:32.374907 | orchestrator | Wednesday 17 September 2025 16:04:59 +0000 (0:00:00.191) 0:03:45.831 *** 2025-09-17 16:12:32.374913 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.374920 | orchestrator | 2025-09-17 16:12:32.374926 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-17 16:12:32.374933 | orchestrator | Wednesday 17 September 2025 16:04:59 +0000 (0:00:00.094) 0:03:45.925 *** 2025-09-17 16:12:32.374939 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.374946 | orchestrator | 
2025-09-17 16:12:32.374952 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-17 16:12:32.374959 | orchestrator | Wednesday 17 September 2025 16:05:00 +0000 (0:00:00.196) 0:03:46.122 *** 2025-09-17 16:12:32.374965 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.374971 | orchestrator | 2025-09-17 16:12:32.374978 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-17 16:12:32.374984 | orchestrator | Wednesday 17 September 2025 16:05:00 +0000 (0:00:00.167) 0:03:46.290 *** 2025-09-17 16:12:32.374991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 16:12:32.374998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-17 16:12:32.375004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 16:12:32.375011 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.375017 | orchestrator | 2025-09-17 16:12:32.375024 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-17 16:12:32.375030 | orchestrator | Wednesday 17 September 2025 16:05:00 +0000 (0:00:00.516) 0:03:46.807 *** 2025-09-17 16:12:32.375037 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.375043 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.375049 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.375056 | orchestrator | 2025-09-17 16:12:32.375081 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-17 16:12:32.375089 | orchestrator | Wednesday 17 September 2025 16:05:01 +0000 (0:00:00.408) 0:03:47.215 *** 2025-09-17 16:12:32.375095 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.375102 | orchestrator | 2025-09-17 16:12:32.375108 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-17 
16:12:32.375115 | orchestrator | Wednesday 17 September 2025 16:05:01 +0000 (0:00:00.188) 0:03:47.403 *** 2025-09-17 16:12:32.375121 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.375128 | orchestrator | 2025-09-17 16:12:32.375134 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-17 16:12:32.375141 | orchestrator | Wednesday 17 September 2025 16:05:01 +0000 (0:00:00.190) 0:03:47.594 *** 2025-09-17 16:12:32.375147 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.375154 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.375160 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.375167 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.375173 | orchestrator | 2025-09-17 16:12:32.375180 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-17 16:12:32.375186 | orchestrator | Wednesday 17 September 2025 16:05:02 +0000 (0:00:00.820) 0:03:48.415 *** 2025-09-17 16:12:32.375235 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.375243 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.375250 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.375256 | orchestrator | 2025-09-17 16:12:32.375263 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-17 16:12:32.375269 | orchestrator | Wednesday 17 September 2025 16:05:02 +0000 (0:00:00.285) 0:03:48.701 *** 2025-09-17 16:12:32.375276 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:12:32.375282 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:12:32.375289 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:12:32.375295 | orchestrator | 2025-09-17 16:12:32.375302 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-17 
16:12:32.375308 | orchestrator | Wednesday 17 September 2025 16:05:03 +0000 (0:00:01.337) 0:03:50.038 *** 2025-09-17 16:12:32.375315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 16:12:32.375321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-17 16:12:32.375328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 16:12:32.375334 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.375340 | orchestrator | 2025-09-17 16:12:32.375351 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-17 16:12:32.375357 | orchestrator | Wednesday 17 September 2025 16:05:04 +0000 (0:00:00.711) 0:03:50.749 *** 2025-09-17 16:12:32.375365 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.375375 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.375385 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.375396 | orchestrator | 2025-09-17 16:12:32.375406 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-17 16:12:32.375415 | orchestrator | Wednesday 17 September 2025 16:05:04 +0000 (0:00:00.291) 0:03:51.040 *** 2025-09-17 16:12:32.375424 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.375434 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.375444 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.375454 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.375463 | orchestrator | 2025-09-17 16:12:32.375472 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-17 16:12:32.375481 | orchestrator | Wednesday 17 September 2025 16:05:05 +0000 (0:00:00.927) 0:03:51.968 *** 2025-09-17 16:12:32.375491 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.375501 | orchestrator | 
ok: [testbed-node-4] 2025-09-17 16:12:32.375512 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.375521 | orchestrator | 2025-09-17 16:12:32.375527 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-17 16:12:32.375534 | orchestrator | Wednesday 17 September 2025 16:05:06 +0000 (0:00:00.281) 0:03:52.249 *** 2025-09-17 16:12:32.375540 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:12:32.375546 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:12:32.375552 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:12:32.375557 | orchestrator | 2025-09-17 16:12:32.375563 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-17 16:12:32.375570 | orchestrator | Wednesday 17 September 2025 16:05:07 +0000 (0:00:01.353) 0:03:53.603 *** 2025-09-17 16:12:32.375576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 16:12:32.375582 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-17 16:12:32.375588 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 16:12:32.375594 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.375600 | orchestrator | 2025-09-17 16:12:32.375606 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-17 16:12:32.375612 | orchestrator | Wednesday 17 September 2025 16:05:08 +0000 (0:00:00.533) 0:03:54.136 *** 2025-09-17 16:12:32.375618 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.375624 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.375636 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.375642 | orchestrator | 2025-09-17 16:12:32.375648 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-17 16:12:32.375654 | orchestrator | Wednesday 17 September 2025 16:05:08 +0000 (0:00:00.332) 0:03:54.469 *** 
2025-09-17 16:12:32.375660 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.375666 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.375672 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.375678 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.375684 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.375690 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.375696 | orchestrator | 2025-09-17 16:12:32.375702 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-17 16:12:32.375709 | orchestrator | Wednesday 17 September 2025 16:05:09 +0000 (0:00:00.686) 0:03:55.155 *** 2025-09-17 16:12:32.375740 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.375748 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.375754 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.375760 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:12:32.375766 | orchestrator | 2025-09-17 16:12:32.375772 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-17 16:12:32.375778 | orchestrator | Wednesday 17 September 2025 16:05:09 +0000 (0:00:00.741) 0:03:55.896 *** 2025-09-17 16:12:32.375784 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.375790 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.375796 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.375802 | orchestrator | 2025-09-17 16:12:32.375808 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-17 16:12:32.375814 | orchestrator | Wednesday 17 September 2025 16:05:10 +0000 (0:00:00.406) 0:03:56.302 *** 2025-09-17 16:12:32.375820 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:12:32.375826 | orchestrator | changed: [testbed-node-1] 2025-09-17 
16:12:32.375832 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:12:32.375838 | orchestrator | 2025-09-17 16:12:32.375844 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-17 16:12:32.375850 | orchestrator | Wednesday 17 September 2025 16:05:11 +0000 (0:00:01.129) 0:03:57.432 *** 2025-09-17 16:12:32.375856 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-17 16:12:32.375862 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-17 16:12:32.375868 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-17 16:12:32.375874 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.375880 | orchestrator | 2025-09-17 16:12:32.375887 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-17 16:12:32.375893 | orchestrator | Wednesday 17 September 2025 16:05:11 +0000 (0:00:00.549) 0:03:57.981 *** 2025-09-17 16:12:32.375899 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.375905 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.375911 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.375917 | orchestrator | 2025-09-17 16:12:32.375923 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-17 16:12:32.375929 | orchestrator | 2025-09-17 16:12:32.375935 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-17 16:12:32.375941 | orchestrator | Wednesday 17 September 2025 16:05:12 +0000 (0:00:00.520) 0:03:58.502 *** 2025-09-17 16:12:32.375952 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:12:32.375958 | orchestrator | 2025-09-17 16:12:32.375964 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-17 
16:12:32.375970 | orchestrator | Wednesday 17 September 2025 16:05:13 +0000 (0:00:00.599) 0:03:59.101 *** 2025-09-17 16:12:32.375976 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:12:32.375991 | orchestrator | 2025-09-17 16:12:32.375997 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-17 16:12:32.376003 | orchestrator | Wednesday 17 September 2025 16:05:13 +0000 (0:00:00.468) 0:03:59.570 *** 2025-09-17 16:12:32.376009 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.376015 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.376021 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.376027 | orchestrator | 2025-09-17 16:12:32.376033 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-17 16:12:32.376039 | orchestrator | Wednesday 17 September 2025 16:05:14 +0000 (0:00:00.806) 0:04:00.377 *** 2025-09-17 16:12:32.376045 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.376051 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.376057 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.376063 | orchestrator | 2025-09-17 16:12:32.376069 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-17 16:12:32.376075 | orchestrator | Wednesday 17 September 2025 16:05:14 +0000 (0:00:00.279) 0:04:00.656 *** 2025-09-17 16:12:32.376081 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.376087 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.376093 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.376099 | orchestrator | 2025-09-17 16:12:32.376105 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-17 16:12:32.376111 | orchestrator | Wednesday 17 September 2025 
16:05:14 +0000 (0:00:00.252) 0:04:00.909 *** 2025-09-17 16:12:32.376117 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.376123 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.376129 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.376135 | orchestrator | 2025-09-17 16:12:32.376141 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-17 16:12:32.376148 | orchestrator | Wednesday 17 September 2025 16:05:15 +0000 (0:00:00.272) 0:04:01.181 *** 2025-09-17 16:12:32.376154 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.376160 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.376166 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.376172 | orchestrator | 2025-09-17 16:12:32.376178 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-17 16:12:32.376184 | orchestrator | Wednesday 17 September 2025 16:05:16 +0000 (0:00:00.919) 0:04:02.101 *** 2025-09-17 16:12:32.376190 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.376210 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.376216 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.376222 | orchestrator | 2025-09-17 16:12:32.376228 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-17 16:12:32.376235 | orchestrator | Wednesday 17 September 2025 16:05:16 +0000 (0:00:00.275) 0:04:02.377 *** 2025-09-17 16:12:32.376241 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.376247 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.376253 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.376259 | orchestrator | 2025-09-17 16:12:32.376265 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-17 16:12:32.376290 | orchestrator | Wednesday 17 September 2025 16:05:16 +0000 
(0:00:00.257) 0:04:02.634 *** 2025-09-17 16:12:32.376297 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.376303 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.376309 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.376315 | orchestrator | 2025-09-17 16:12:32.376321 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-17 16:12:32.376327 | orchestrator | Wednesday 17 September 2025 16:05:17 +0000 (0:00:00.725) 0:04:03.360 *** 2025-09-17 16:12:32.376333 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.376339 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.376345 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.376356 | orchestrator | 2025-09-17 16:12:32.376362 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-17 16:12:32.376368 | orchestrator | Wednesday 17 September 2025 16:05:18 +0000 (0:00:00.841) 0:04:04.202 *** 2025-09-17 16:12:32.376374 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.376380 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.376386 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.376392 | orchestrator | 2025-09-17 16:12:32.376398 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-17 16:12:32.376404 | orchestrator | Wednesday 17 September 2025 16:05:18 +0000 (0:00:00.282) 0:04:04.484 *** 2025-09-17 16:12:32.376410 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.376416 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.376422 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.376428 | orchestrator | 2025-09-17 16:12:32.376435 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-17 16:12:32.376441 | orchestrator | Wednesday 17 September 2025 16:05:18 +0000 (0:00:00.273) 0:04:04.758 *** 2025-09-17 16:12:32.376447 
| orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.376453 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.376459 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.376465 | orchestrator | 2025-09-17 16:12:32.376471 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-17 16:12:32.376477 | orchestrator | Wednesday 17 September 2025 16:05:18 +0000 (0:00:00.276) 0:04:05.035 *** 2025-09-17 16:12:32.376483 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.376489 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.376495 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.376501 | orchestrator | 2025-09-17 16:12:32.376507 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-17 16:12:32.376513 | orchestrator | Wednesday 17 September 2025 16:05:19 +0000 (0:00:00.265) 0:04:05.300 *** 2025-09-17 16:12:32.376519 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.376529 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.376535 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.376541 | orchestrator | 2025-09-17 16:12:32.376547 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-17 16:12:32.376553 | orchestrator | Wednesday 17 September 2025 16:05:19 +0000 (0:00:00.429) 0:04:05.729 *** 2025-09-17 16:12:32.376559 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.376565 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.376571 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.376577 | orchestrator | 2025-09-17 16:12:32.376583 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-17 16:12:32.376589 | orchestrator | Wednesday 17 September 2025 16:05:19 +0000 (0:00:00.337) 0:04:06.066 *** 2025-09-17 16:12:32.376595 | 
orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.376601 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.376607 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.376613 | orchestrator | 2025-09-17 16:12:32.376619 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-17 16:12:32.376625 | orchestrator | Wednesday 17 September 2025 16:05:20 +0000 (0:00:00.328) 0:04:06.395 *** 2025-09-17 16:12:32.376631 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.376637 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.376643 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.376649 | orchestrator | 2025-09-17 16:12:32.376656 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-17 16:12:32.376661 | orchestrator | Wednesday 17 September 2025 16:05:20 +0000 (0:00:00.325) 0:04:06.721 *** 2025-09-17 16:12:32.376667 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.376673 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.376679 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.376685 | orchestrator | 2025-09-17 16:12:32.376691 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-17 16:12:32.376701 | orchestrator | Wednesday 17 September 2025 16:05:21 +0000 (0:00:00.575) 0:04:07.296 *** 2025-09-17 16:12:32.376707 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.376713 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.376719 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.376725 | orchestrator | 2025-09-17 16:12:32.376731 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-17 16:12:32.376737 | orchestrator | Wednesday 17 September 2025 16:05:21 +0000 (0:00:00.600) 0:04:07.896 *** 2025-09-17 16:12:32.376743 | orchestrator | ok: [testbed-node-0] 2025-09-17 
16:12:32.376749 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.376756 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.376762 | orchestrator | 2025-09-17 16:12:32.376768 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-17 16:12:32.376774 | orchestrator | Wednesday 17 September 2025 16:05:22 +0000 (0:00:00.348) 0:04:08.244 *** 2025-09-17 16:12:32.376780 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:12:32.376786 | orchestrator | 2025-09-17 16:12:32.376792 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-17 16:12:32.376798 | orchestrator | Wednesday 17 September 2025 16:05:22 +0000 (0:00:00.654) 0:04:08.899 *** 2025-09-17 16:12:32.376804 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.376810 | orchestrator | 2025-09-17 16:12:32.376816 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-17 16:12:32.376822 | orchestrator | Wednesday 17 September 2025 16:05:22 +0000 (0:00:00.136) 0:04:09.036 *** 2025-09-17 16:12:32.376828 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-17 16:12:32.376834 | orchestrator | 2025-09-17 16:12:32.376858 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-09-17 16:12:32.376865 | orchestrator | Wednesday 17 September 2025 16:05:23 +0000 (0:00:00.903) 0:04:09.939 *** 2025-09-17 16:12:32.376871 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.376877 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.376883 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.376889 | orchestrator | 2025-09-17 16:12:32.376895 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-17 16:12:32.376902 | orchestrator | Wednesday 17 
September 2025 16:05:24 +0000 (0:00:00.280) 0:04:10.220 *** 2025-09-17 16:12:32.376908 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.376914 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.376920 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.376926 | orchestrator | 2025-09-17 16:12:32.376932 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-17 16:12:32.376938 | orchestrator | Wednesday 17 September 2025 16:05:24 +0000 (0:00:00.462) 0:04:10.682 *** 2025-09-17 16:12:32.376944 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:12:32.376950 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:12:32.376956 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:12:32.376962 | orchestrator | 2025-09-17 16:12:32.376968 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-17 16:12:32.376974 | orchestrator | Wednesday 17 September 2025 16:05:25 +0000 (0:00:01.230) 0:04:11.913 *** 2025-09-17 16:12:32.376980 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:12:32.376986 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:12:32.376992 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:12:32.376998 | orchestrator | 2025-09-17 16:12:32.377005 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-17 16:12:32.377011 | orchestrator | Wednesday 17 September 2025 16:05:26 +0000 (0:00:00.768) 0:04:12.681 *** 2025-09-17 16:12:32.377017 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:12:32.377023 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:12:32.377029 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:12:32.377035 | orchestrator | 2025-09-17 16:12:32.377041 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-17 16:12:32.377051 | orchestrator | Wednesday 17 September 2025 16:05:27 +0000 
(0:00:00.661) 0:04:13.343 ***
2025-09-17 16:12:32.377057 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.377063 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.377069 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.377075 | orchestrator |
2025-09-17 16:12:32.377081 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-09-17 16:12:32.377087 | orchestrator | Wednesday 17 September 2025 16:05:28 +0000 (0:00:00.834) 0:04:14.177 ***
2025-09-17 16:12:32.377096 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.377103 | orchestrator |
2025-09-17 16:12:32.377109 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-09-17 16:12:32.377115 | orchestrator | Wednesday 17 September 2025 16:05:29 +0000 (0:00:01.240) 0:04:15.417 ***
2025-09-17 16:12:32.377121 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.377127 | orchestrator |
2025-09-17 16:12:32.377133 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-09-17 16:12:32.377139 | orchestrator | Wednesday 17 September 2025 16:05:29 +0000 (0:00:00.645) 0:04:16.062 ***
2025-09-17 16:12:32.377145 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 16:12:32.377151 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 16:12:32.377157 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 16:12:32.377163 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-17 16:12:32.377169 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-17 16:12:32.377176 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-17 16:12:32.377182 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-17 16:12:32.377188 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-09-17 16:12:32.377205 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-09-17 16:12:32.377212 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-09-17 16:12:32.377218 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-17 16:12:32.377224 | orchestrator | changed: [testbed-node-1 -> {{ item }}]
2025-09-17 16:12:32.377230 | orchestrator |
2025-09-17 16:12:32.377236 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-09-17 16:12:32.377242 | orchestrator | Wednesday 17 September 2025 16:05:33 +0000 (0:00:03.504) 0:04:19.567 ***
2025-09-17 16:12:32.377248 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.377254 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.377260 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.377266 | orchestrator |
2025-09-17 16:12:32.377272 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-09-17 16:12:32.377279 | orchestrator | Wednesday 17 September 2025 16:05:34 +0000 (0:00:01.132) 0:04:20.699 ***
2025-09-17 16:12:32.377284 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.377290 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.377297 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.377303 | orchestrator |
2025-09-17 16:12:32.377309 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-09-17 16:12:32.377315 | orchestrator | Wednesday 17 September 2025 16:05:35 +0000 (0:00:00.441) 0:04:21.141 ***
2025-09-17 16:12:32.377321 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.377327 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.377333 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.377339 | orchestrator |
2025-09-17 16:12:32.377345 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-09-17 16:12:32.377352 | orchestrator | Wednesday 17 September 2025 16:05:35 +0000 (0:00:00.285) 0:04:21.427 ***
2025-09-17 16:12:32.377358 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.377364 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.377374 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.377380 | orchestrator |
2025-09-17 16:12:32.377386 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-09-17 16:12:32.377411 | orchestrator | Wednesday 17 September 2025 16:05:36 +0000 (0:00:01.371) 0:04:22.798 ***
2025-09-17 16:12:32.377418 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.377425 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.377431 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.377437 | orchestrator |
2025-09-17 16:12:32.377443 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-09-17 16:12:32.377449 | orchestrator | Wednesday 17 September 2025 16:05:37 +0000 (0:00:01.190) 0:04:23.989 ***
2025-09-17 16:12:32.377455 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.377461 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.377467 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.377473 | orchestrator |
2025-09-17 16:12:32.377479 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-09-17 16:12:32.377485 | orchestrator | Wednesday 17 September 2025 16:05:38 +0000 (0:00:00.603) 0:04:24.252 ***
2025-09-17 16:12:32.377491 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:12:32.377497 | orchestrator |
2025-09-17 16:12:32.377503 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-09-17 16:12:32.377509 | orchestrator | Wednesday 17 September 2025 16:05:38 +0000 (0:00:00.603) 0:04:24.855 ***
2025-09-17 16:12:32.377515 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.377521 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.377527 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.377533 | orchestrator |
2025-09-17 16:12:32.377539 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-09-17 16:12:32.377545 | orchestrator | Wednesday 17 September 2025 16:05:39 +0000 (0:00:00.293) 0:04:25.148 ***
2025-09-17 16:12:32.377551 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.377557 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.377563 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.377569 | orchestrator |
2025-09-17 16:12:32.377575 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-09-17 16:12:32.377581 | orchestrator | Wednesday 17 September 2025 16:05:39 +0000 (0:00:00.267) 0:04:25.416 ***
2025-09-17 16:12:32.377587 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:12:32.377593 | orchestrator |
2025-09-17 16:12:32.377599 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-09-17 16:12:32.377605 | orchestrator | Wednesday 17 September 2025 16:05:39 +0000 (0:00:00.628) 0:04:26.044 ***
2025-09-17 16:12:32.377615 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.377621 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.377627 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.377633 | orchestrator |
2025-09-17 16:12:32.377639 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-09-17 16:12:32.377645 | orchestrator | Wednesday 17 September 2025 16:05:41 +0000 (0:00:01.434) 0:04:27.479 ***
2025-09-17 16:12:32.377651 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.377657 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.377663 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.377669 | orchestrator |
2025-09-17 16:12:32.377675 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-09-17 16:12:32.377681 | orchestrator | Wednesday 17 September 2025 16:05:42 +0000 (0:00:01.102) 0:04:28.581 ***
2025-09-17 16:12:32.377687 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.377693 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.377699 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.377705 | orchestrator |
2025-09-17 16:12:32.377711 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-09-17 16:12:32.377723 | orchestrator | Wednesday 17 September 2025 16:05:44 +0000 (0:00:01.923) 0:04:30.505 ***
2025-09-17 16:12:32.377729 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.377735 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.377741 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.377746 | orchestrator |
2025-09-17 16:12:32.377753 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-09-17 16:12:32.377759 | orchestrator | Wednesday 17 September 2025 16:05:46 +0000 (0:00:01.852) 0:04:32.357 ***
2025-09-17 16:12:32.377765 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:12:32.377771 | orchestrator |
2025-09-17 16:12:32.377777 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-09-17 16:12:32.377783 | orchestrator | Wednesday 17 September 2025 16:05:46 +0000 (0:00:00.486) 0:04:32.844 ***
2025-09-17 16:12:32.377789 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-09-17 16:12:32.377795 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.377801 | orchestrator |
2025-09-17 16:12:32.377807 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-09-17 16:12:32.377812 | orchestrator | Wednesday 17 September 2025 16:06:08 +0000 (0:00:22.113) 0:04:54.957 ***
2025-09-17 16:12:32.377818 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.377825 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.377831 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.377837 | orchestrator |
2025-09-17 16:12:32.377843 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-09-17 16:12:32.377849 | orchestrator | Wednesday 17 September 2025 16:06:19 +0000 (0:00:10.782) 0:05:05.739 ***
2025-09-17 16:12:32.377855 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.377861 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.377867 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.377873 | orchestrator |
2025-09-17 16:12:32.377879 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-09-17 16:12:32.377885 | orchestrator | Wednesday 17 September 2025 16:06:19 +0000 (0:00:00.296) 0:05:06.036 ***
2025-09-17 16:12:32.377910 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1671f77338f7294ca214ea60babeec5c9b095c57'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-09-17 16:12:32.377919 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1671f77338f7294ca214ea60babeec5c9b095c57'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-09-17 16:12:32.377926 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1671f77338f7294ca214ea60babeec5c9b095c57'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-09-17 16:12:32.377933 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1671f77338f7294ca214ea60babeec5c9b095c57'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-09-17 16:12:32.377942 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1671f77338f7294ca214ea60babeec5c9b095c57'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-09-17 16:12:32.377954 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1671f77338f7294ca214ea60babeec5c9b095c57'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1671f77338f7294ca214ea60babeec5c9b095c57'}])
2025-09-17 16:12:32.377962 | orchestrator |
2025-09-17 16:12:32.377968 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-17 16:12:32.377974 | orchestrator | Wednesday 17 September 2025 16:06:34 +0000 (0:00:14.955) 0:05:20.991 ***
2025-09-17 16:12:32.377980 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.377986 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.377992 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.377998 | orchestrator |
2025-09-17 16:12:32.378004 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-17 16:12:32.378010 | orchestrator | Wednesday 17 September 2025 16:06:35 +0000 (0:00:00.423) 0:05:21.414 ***
2025-09-17 16:12:32.378034 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:12:32.378042 | orchestrator |
2025-09-17 16:12:32.378048 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-17 16:12:32.378054 | orchestrator | Wednesday 17 September 2025 16:06:35 +0000 (0:00:00.547) 0:05:22.000 ***
2025-09-17 16:12:32.378060 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.378066 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.378072 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.378078 | orchestrator |
2025-09-17 16:12:32.378084 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-17 16:12:32.378090 | orchestrator | Wednesday 17 September 2025 16:06:36 +0000 (0:00:00.547) 0:05:22.548 ***
2025-09-17 16:12:32.378096 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.378102 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378108 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.378114 | orchestrator |
2025-09-17 16:12:32.378120 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-17 16:12:32.378126 | orchestrator | Wednesday 17 September 2025 16:06:36 +0000 (0:00:00.385) 0:05:22.933 ***
2025-09-17 16:12:32.378132 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 16:12:32.378138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-17 16:12:32.378144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-17 16:12:32.378150 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378156 | orchestrator |
2025-09-17 16:12:32.378162 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-17 16:12:32.378168 | orchestrator | Wednesday 17 September 2025 16:06:37 +0000 (0:00:00.619) 0:05:23.553 ***
2025-09-17 16:12:32.378174 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.378180 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.378186 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.378191 | orchestrator |
2025-09-17 16:12:32.378208 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-09-17 16:12:32.378214 | orchestrator |
2025-09-17 16:12:32.378220 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-17 16:12:32.378246 | orchestrator | Wednesday 17 September 2025 16:06:38 +0000 (0:00:00.549) 0:05:24.102 ***
2025-09-17 16:12:32.378253 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:12:32.378259 | orchestrator |
2025-09-17 16:12:32.378270 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-17 16:12:32.378276 | orchestrator | Wednesday 17 September 2025 16:06:38 +0000 (0:00:00.718) 0:05:24.821 ***
2025-09-17 16:12:32.378282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:12:32.378288 | orchestrator |
2025-09-17 16:12:32.378294 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-17 16:12:32.378300 | orchestrator | Wednesday 17 September 2025 16:06:39 +0000 (0:00:00.506) 0:05:25.328 ***
2025-09-17 16:12:32.378306 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.378312 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.378318 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.378324 | orchestrator |
2025-09-17 16:12:32.378330 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-17 16:12:32.378336 | orchestrator | Wednesday 17 September 2025 16:06:40 +0000 (0:00:00.953) 0:05:26.281 ***
2025-09-17 16:12:32.378342 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378348 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.378354 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.378360 | orchestrator |
2025-09-17 16:12:32.378366 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-17 16:12:32.378372 | orchestrator | Wednesday 17 September 2025 16:06:40 +0000 (0:00:00.341) 0:05:26.623 ***
2025-09-17 16:12:32.378378 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378384 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.378390 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.378396 | orchestrator |
2025-09-17 16:12:32.378402 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-17 16:12:32.378408 | orchestrator | Wednesday 17 September 2025 16:06:40 +0000 (0:00:00.359) 0:05:26.982 ***
2025-09-17 16:12:32.378414 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378420 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.378426 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.378432 | orchestrator |
2025-09-17 16:12:32.378438 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-17 16:12:32.378448 | orchestrator | Wednesday 17 September 2025 16:06:41 +0000 (0:00:00.329) 0:05:27.311 ***
2025-09-17 16:12:32.378454 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.378460 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.378466 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.378472 | orchestrator |
2025-09-17 16:12:32.378478 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-17 16:12:32.378484 | orchestrator | Wednesday 17 September 2025 16:06:42 +0000 (0:00:01.024) 0:05:28.336 ***
2025-09-17 16:12:32.378490 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378496 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.378502 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.378508 | orchestrator |
2025-09-17 16:12:32.378514 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-17 16:12:32.378520 | orchestrator | Wednesday 17 September 2025 16:06:42 +0000 (0:00:00.343) 0:05:28.679 ***
2025-09-17 16:12:32.378526 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378532 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.378538 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.378544 | orchestrator |
2025-09-17 16:12:32.378551 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-17 16:12:32.378557 | orchestrator | Wednesday 17 September 2025 16:06:42 +0000 (0:00:00.304) 0:05:28.984 ***
2025-09-17 16:12:32.378563 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.378569 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.378575 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.378581 | orchestrator |
2025-09-17 16:12:32.378586 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-17 16:12:32.378596 | orchestrator | Wednesday 17 September 2025 16:06:43 +0000 (0:00:00.702) 0:05:29.686 ***
2025-09-17 16:12:32.378603 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.378609 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.378615 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.378621 | orchestrator |
2025-09-17 16:12:32.378627 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-17 16:12:32.378633 | orchestrator | Wednesday 17 September 2025 16:06:44 +0000 (0:00:01.209) 0:05:30.896 ***
2025-09-17 16:12:32.378639 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378645 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.378651 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.378657 | orchestrator |
2025-09-17 16:12:32.378663 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-17 16:12:32.378669 | orchestrator | Wednesday 17 September 2025 16:06:45 +0000 (0:00:00.390) 0:05:31.287 ***
2025-09-17 16:12:32.378675 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.378681 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.378687 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.378693 | orchestrator |
2025-09-17 16:12:32.378699 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-17 16:12:32.378705 | orchestrator | Wednesday 17 September 2025 16:06:45 +0000 (0:00:00.412) 0:05:31.699 ***
2025-09-17 16:12:32.378711 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.378717 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378723 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.378729 | orchestrator |
2025-09-17 16:12:32.378735 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-17 16:12:32.378741 | orchestrator | Wednesday 17 September 2025 16:06:45 +0000 (0:00:00.341) 0:05:32.041 ***
2025-09-17 16:12:32.378747 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378753 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.378759 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.378765 | orchestrator |
2025-09-17 16:12:32.378788 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-17 16:12:32.378795 | orchestrator | Wednesday 17 September 2025 16:06:46 +0000 (0:00:00.524) 0:05:32.566 ***
2025-09-17 16:12:32.378801 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378807 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.378813 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.378819 | orchestrator |
2025-09-17 16:12:32.378825 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-17 16:12:32.378831 | orchestrator | Wednesday 17 September 2025 16:06:46 +0000 (0:00:00.307) 0:05:32.873 ***
2025-09-17 16:12:32.378837 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378843 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.378849 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.378855 | orchestrator |
2025-09-17 16:12:32.378861 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-17 16:12:32.378867 | orchestrator | Wednesday 17 September 2025 16:06:47 +0000 (0:00:00.301) 0:05:33.175 ***
2025-09-17 16:12:32.378873 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.378879 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.378884 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.378890 | orchestrator |
2025-09-17 16:12:32.378896 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-17 16:12:32.378903 | orchestrator | Wednesday 17 September 2025 16:06:47 +0000 (0:00:00.328) 0:05:33.503 ***
2025-09-17 16:12:32.378909 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.378915 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.378921 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.378927 | orchestrator |
2025-09-17 16:12:32.378933 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-17 16:12:32.378939 | orchestrator | Wednesday 17 September 2025 16:06:47 +0000 (0:00:00.539) 0:05:34.043 ***
2025-09-17 16:12:32.378949 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.378955 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.378961 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.378967 | orchestrator |
2025-09-17 16:12:32.378974 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-17 16:12:32.378980 | orchestrator | Wednesday 17 September 2025 16:06:48 +0000 (0:00:00.346) 0:05:34.389 ***
2025-09-17 16:12:32.378986 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.378992 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.378998 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.379004 | orchestrator |
2025-09-17 16:12:32.379010 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-09-17 16:12:32.379016 | orchestrator | Wednesday 17 September 2025 16:06:48 +0000 (0:00:00.548) 0:05:34.938 ***
2025-09-17 16:12:32.379027 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 16:12:32.379034 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-17 16:12:32.379040 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-17 16:12:32.379046 | orchestrator |
2025-09-17 16:12:32.379052 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-09-17 16:12:32.379058 | orchestrator | Wednesday 17 September 2025 16:06:49 +0000 (0:00:00.857) 0:05:35.795 ***
2025-09-17 16:12:32.379064 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:12:32.379070 | orchestrator |
2025-09-17 16:12:32.379076 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-09-17 16:12:32.379082 | orchestrator | Wednesday 17 September 2025 16:06:50 +0000 (0:00:00.757) 0:05:36.553 ***
2025-09-17 16:12:32.379088 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.379094 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.379100 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.379107 | orchestrator |
2025-09-17 16:12:32.379113 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-09-17 16:12:32.379119 | orchestrator | Wednesday 17 September 2025 16:06:51 +0000 (0:00:00.809) 0:05:37.363 ***
2025-09-17 16:12:32.379125 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.379131 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.379137 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.379143 | orchestrator |
2025-09-17 16:12:32.379149 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-09-17 16:12:32.379155 | orchestrator | Wednesday 17 September 2025 16:06:51 +0000 (0:00:00.346) 0:05:37.710 ***
2025-09-17 16:12:32.379161 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 16:12:32.379167 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 16:12:32.379174 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 16:12:32.379180 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-09-17 16:12:32.379186 | orchestrator |
2025-09-17 16:12:32.379192 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-09-17 16:12:32.379233 | orchestrator | Wednesday 17 September 2025 16:07:02 +0000 (0:00:11.179) 0:05:48.890 ***
2025-09-17 16:12:32.379239 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.379245 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.379251 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.379257 | orchestrator |
2025-09-17 16:12:32.379263 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-09-17 16:12:32.379269 | orchestrator | Wednesday 17 September 2025 16:07:03 +0000 (0:00:00.536) 0:05:49.426 ***
2025-09-17 16:12:32.379276 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-17 16:12:32.379282 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-17 16:12:32.379287 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-17 16:12:32.379292 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-17 16:12:32.379302 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 16:12:32.379307 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 16:12:32.379312 | orchestrator |
2025-09-17 16:12:32.379318 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-09-17 16:12:32.379340 | orchestrator | Wednesday 17 September 2025 16:07:05 +0000 (0:00:02.146) 0:05:51.572 ***
2025-09-17 16:12:32.379347 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-17 16:12:32.379352 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-17 16:12:32.379357 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-17 16:12:32.379362 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 16:12:32.379368 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-17 16:12:32.379373 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-17 16:12:32.379378 | orchestrator |
2025-09-17 16:12:32.379384 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-09-17 16:12:32.379389 | orchestrator | Wednesday 17 September 2025 16:07:06 +0000 (0:00:01.372) 0:05:52.945 ***
2025-09-17 16:12:32.379394 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.379399 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.379405 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.379410 | orchestrator |
2025-09-17 16:12:32.379415 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-09-17 16:12:32.379420 | orchestrator | Wednesday 17 September 2025 16:07:07 +0000 (0:00:00.632) 0:05:53.577 ***
2025-09-17 16:12:32.379426 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.379431 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.379436 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.379441 | orchestrator |
2025-09-17 16:12:32.379447 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-09-17 16:12:32.379452 | orchestrator | Wednesday 17 September 2025 16:07:07 +0000 (0:00:00.415) 0:05:53.993 ***
2025-09-17 16:12:32.379457 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.379462 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.379467 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.379473 | orchestrator |
2025-09-17 16:12:32.379478 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-09-17 16:12:32.379483 | orchestrator | Wednesday 17 September 2025 16:07:08 +0000 (0:00:00.251) 0:05:54.244 ***
2025-09-17 16:12:32.379488 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:12:32.379494 | orchestrator |
2025-09-17 16:12:32.379499 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-09-17 16:12:32.379504 | orchestrator | Wednesday 17 September 2025 16:07:08 +0000 (0:00:00.483) 0:05:54.728 ***
2025-09-17 16:12:32.379510 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.379515 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.379521 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.379526 | orchestrator |
2025-09-17 16:12:32.379534 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-09-17 16:12:32.379540 | orchestrator | Wednesday 17 September 2025 16:07:09 +0000 (0:00:00.429) 0:05:55.158 ***
2025-09-17 16:12:32.379545 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:32.379550 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:12:32.379555 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:12:32.379561 | orchestrator |
2025-09-17 16:12:32.379566 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-09-17 16:12:32.379571 | orchestrator | Wednesday 17 September 2025 16:07:09 +0000 (0:00:00.323) 0:05:55.482 ***
2025-09-17 16:12:32.379577 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:12:32.379582 | orchestrator |
2025-09-17 16:12:32.379587 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-09-17 16:12:32.379596 | orchestrator | Wednesday 17 September 2025 16:07:09 +0000 (0:00:00.435) 0:05:55.917 ***
2025-09-17 16:12:32.379602 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.379607 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.379612 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.379617 | orchestrator |
2025-09-17 16:12:32.379623 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-09-17 16:12:32.379628 | orchestrator | Wednesday 17 September 2025 16:07:11 +0000 (0:00:01.360) 0:05:57.278 ***
2025-09-17 16:12:32.379633 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.379639 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.379644 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.379649 | orchestrator |
2025-09-17 16:12:32.379654 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-09-17 16:12:32.379660 | orchestrator | Wednesday 17 September 2025 16:07:12 +0000 (0:00:01.075) 0:05:58.353 ***
2025-09-17 16:12:32.379665 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.379670 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.379675 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.379680 | orchestrator |
2025-09-17 16:12:32.379686 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-09-17 16:12:32.379691 | orchestrator | Wednesday 17 September 2025 16:07:13 +0000 (0:00:01.708) 0:06:00.061 ***
2025-09-17 16:12:32.379696 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.379701 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.379707 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.379712 | orchestrator |
2025-09-17 16:12:32.379717 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml]
************************************** 2025-09-17 16:12:32.379722 | orchestrator | Wednesday 17 September 2025 16:07:15 +0000 (0:00:01.878) 0:06:01.940 *** 2025-09-17 16:12:32.379728 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.379733 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.379738 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-17 16:12:32.379743 | orchestrator | 2025-09-17 16:12:32.379749 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-17 16:12:32.379754 | orchestrator | Wednesday 17 September 2025 16:07:16 +0000 (0:00:00.760) 0:06:02.701 *** 2025-09-17 16:12:32.379759 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-17 16:12:32.379764 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-17 16:12:32.379784 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-17 16:12:32.379791 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-17 16:12:32.379796 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-09-17 16:12:32.379801 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
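The "Wait for all mgr to be up" task above retried for about 36 seconds until the active mgr plus its standbys matched the expected daemon count. A minimal Python sketch of that readiness check; the JSON shape and field names are assumptions modelled on `ceph mgr dump -f json` output, not taken from ceph-ansible itself:

```python
import json

def all_mgrs_up(mgr_dump: str, expected: int) -> bool:
    """Return True once one active mgr plus its standbys reach the
    expected daemon count (three here: testbed-node-0/1/2)."""
    state = json.loads(mgr_dump)
    active = 1 if state.get("active_name") else 0
    return active + len(state.get("standbys", [])) >= expected

# Hypothetical sample documents shaped like `ceph mgr dump` output.
booting = json.dumps({"active_name": "testbed-node-0", "standbys": []})
ready = json.dumps({"active_name": "testbed-node-0",
                    "standbys": [{"name": "testbed-node-1"},
                                 {"name": "testbed-node-2"}]})
print(all_mgrs_up(booting, 3))  # False
print(all_mgrs_up(ready, 3))   # True
```

In the playbook this check runs under `retries`/`delay`, which is why the log shows the countdown from 30 retries before the eventual `ok:`.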
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Wednesday 17 September 2025 16:07:52 +0000 (0:00:36.285) 0:06:38.986 ***
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Wednesday 17 September 2025 16:07:54 +0000 (0:00:01.383) 0:06:40.369 ***
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Wednesday 17 September 2025 16:07:54 +0000 (0:00:00.299) 0:06:40.669 ***
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Wednesday 17 September 2025 16:07:54 +0000 (0:00:00.144) 0:06:40.813 ***
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Wednesday 17 September 2025 16:08:01 +0000 (0:00:06.379) 0:06:47.192 ***
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Wednesday 17 September 2025 16:08:05 +0000 (0:00:04.810) 0:06:52.002 ***
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Wednesday 17 September 2025 16:08:06 +0000 (0:00:00.682) 0:06:52.685 ***
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Wednesday 17 September 2025 16:08:07 +0000 (0:00:00.521) 0:06:53.206 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Wednesday 17 September 2025 16:08:07 +0000 (0:00:00.563) 0:06:53.770 ***
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Wednesday 17 September 2025 16:08:08 +0000 (0:00:01.160) 0:06:54.930 ***
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Wednesday 17 September 2025 16:08:09 +0000 (0:00:00.580) 0:06:55.511 ***
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Wednesday 17 September 2025 16:08:10 +0000 (0:00:00.833) 0:06:56.345 ***
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Wednesday 17 September 2025 16:08:10 +0000 (0:00:00.493) 0:06:56.838 ***
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Wednesday 17 September 2025 16:08:11 +0000 (0:00:00.680) 0:06:57.519 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Wednesday 17 September 2025 16:08:11 +0000 (0:00:00.312) 0:06:57.832 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Wednesday 17 September 2025 16:08:12 +0000 (0:00:00.699) 0:06:58.531 ***
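The `check_running_containers.yml` tasks above probe each OSD host for per-daemon containers (mon, osd, mds, rgw, mgr, ...) so later handlers know which daemons actually run there. A minimal sketch of that classification step, assuming container names follow the usual `ceph-<daemon>-...` convention; the names and prefixes below are illustrative, not taken from the playbook:

```python
def detect_ceph_daemons(container_names):
    """Classify running container names into the ceph daemon types the
    ceph-handler checks look for, by name prefix."""
    daemons = ("mon", "osd", "mds", "rgw", "mgr",
               "rbd-mirror", "nfs", "crash", "exporter")
    found = set()
    for name in container_names:
        for daemon in daemons:
            if name.startswith(f"ceph-{daemon}"):
                found.add(daemon)
    return found

# Hypothetical `docker ps --format '{{.Names}}'`-style listing for an OSD host.
running = ["ceph-osd-3", "ceph-crash-testbed-node-3",
           "ceph-exporter-testbed-node-3"]
print(sorted(detect_ceph_daemons(running)))  # ['crash', 'exporter', 'osd']
```

The per-daemon results then feed the `Set_fact handler_*_status` tasks that appear further down in the log.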
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Wednesday 17 September 2025 16:08:13 +0000 (0:00:00.743) 0:06:59.275 ***
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [ceph-handler : Check for a mgr container] ********************************
Wednesday 17 September 2025 16:08:14 +0000 (0:00:01.009) 0:07:00.284 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Wednesday 17 September 2025 16:08:14 +0000 (0:00:00.310) 0:07:00.594 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Wednesday 17 September 2025 16:08:14 +0000 (0:00:00.301) 0:07:00.896 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Wednesday 17 September 2025 16:08:15 +0000 (0:00:00.317) 0:07:01.214 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Wednesday 17 September 2025 16:08:16 +0000 (0:00:00.965) 0:07:02.180 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Wednesday 17 September 2025 16:08:16 +0000 (0:00:00.656) 0:07:02.837 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Wednesday 17 September 2025 16:08:17 +0000 (0:00:00.295) 0:07:03.132 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Wednesday 17 September 2025 16:08:17 +0000 (0:00:00.278) 0:07:03.411 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Wednesday 17 September 2025 16:08:17 +0000 (0:00:00.544) 0:07:03.956 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Wednesday 17 September 2025 16:08:18 +0000 (0:00:00.309) 0:07:04.265 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Wednesday 17 September 2025 16:08:18 +0000 (0:00:00.333) 0:07:04.599 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Wednesday 17 September 2025 16:08:18 +0000 (0:00:00.299) 0:07:04.898 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Wednesday 17 September 2025 16:08:19 +0000 (0:00:00.313) 0:07:05.211 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Wednesday 17 September 2025 16:08:19 +0000 (0:00:00.557) 0:07:05.768 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Wednesday 17 September 2025 16:08:20 +0000 (0:00:00.335) 0:07:06.104 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Wednesday 17 September 2025 16:08:20 +0000 (0:00:00.558) 0:07:06.663 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Wednesday 17 September 2025 16:08:21 +0000 (0:00:00.578) 0:07:07.242 ***
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Wednesday 17 September 2025 16:08:21 +0000 (0:00:00.581) 0:07:07.823 ***
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Wednesday 17 September 2025 16:08:22 +0000 (0:00:00.612) 0:07:08.435 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Wednesday 17 September 2025 16:08:22 +0000 (0:00:00.368) 0:07:08.804 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Wednesday 17 September 2025 16:08:22 +0000 (0:00:00.260) 0:07:09.064 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Wednesday 17 September 2025 16:08:23 +0000 (0:00:00.585) 0:07:09.650 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Wednesday 17 September 2025 16:08:23 +0000 (0:00:00.286) 0:07:09.937 ***
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
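The "Apply operating system tuning" task above loops over `os_tuning_params`-style items and applies each as a kernel sysctl on the OSD hosts. A minimal sketch of what those items amount to in `sysctl.conf` form (a rendering helper for illustration only; the real task applies values live via Ansible's sysctl module, and items carrying `enable: False` would be skipped):

```python
def render_sysctl(items):
    """Render os_tuning_params-style dicts into sysctl.conf lines,
    skipping any item explicitly disabled with 'enable': False."""
    lines = []
    for item in items:
        if item.get("enable", True):
            lines.append(f"{item['name']} = {item['value']}")
    return "\n".join(lines)

# The parameter set seen in the log above.
params = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]
print(render_sysctl(params))
```

These values raise async-I/O and file-handle limits for the OSD daemons and keep the nodes from reclaiming or swapping aggressively under memory pressure.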
Wednesday 17 September 2025 16:08:28 +0000 (0:00:04.215) 0:07:14.153 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Wednesday 17 September 2025 16:08:28 +0000 (0:00:00.251) 0:07:14.404 ***
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Wednesday 17 September 2025 16:08:28 +0000 (0:00:00.450) 0:07:14.854 ***
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Wednesday 17 September 2025 16:08:29 +0000 (0:00:01.084) 0:07:15.939 ***
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Wednesday 17 September 2025 16:08:31 +0000 (0:00:01.973) 0:07:17.913 ***
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : Set noup flag] ************************************************
Wednesday 17 September 2025 16:08:32 +0000 (0:00:01.078) 0:07:18.991 ***
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Wednesday 17 September 2025 16:08:34 +0000 (0:00:02.007) 0:07:20.999 ***
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Use ceph-volume to create osds] *******************************
Wednesday 17 September 2025 16:08:35 +0000 (0:00:00.573) 0:07:21.572 ***
changed: [testbed-node-5] => (item={'data': 'osd-block-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133', 'data_vg': 'ceph-2618dc29-ef9a-5981-b8ae-0a6fa7f1f133'})
changed: [testbed-node-3] => (item={'data': 'osd-block-3c66c71d-5352-5b3e-b37c-d5d685617e79', 'data_vg': 'ceph-3c66c71d-5352-5b3e-b37c-d5d685617e79'})
changed: [testbed-node-4] => (item={'data': 'osd-block-17f552da-d70b-5fe0-b76a-79be1323ddb4', 'data_vg': 'ceph-17f552da-d70b-5fe0-b76a-79be1323ddb4'})
changed: [testbed-node-5] => (item={'data': 'osd-block-ce5409dd-a4db-5391-81df-07600c6136f3', 'data_vg': 'ceph-ce5409dd-a4db-5391-81df-07600c6136f3'})
changed: [testbed-node-3] => (item={'data': 'osd-block-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c', 'data_vg': 'ceph-e55f2ffc-2f4d-55e1-8c19-2e9977a4942c'})
changed: [testbed-node-4] => (item={'data': 'osd-block-d72d4826-7802-5629-b85e-59298af53c3a', 'data_vg': 'ceph-d72d4826-7802-5629-b85e-59298af53c3a'})

TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
Wednesday 17 September 2025 16:09:19 +0000 (0:00:43.701) 0:08:05.273 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
Wednesday 17 September 2025 16:09:19 +0000 (0:00:00.326) 0:08:05.600 ***
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Get osd ids] **************************************************
Wednesday 17 September 2025 16:09:20 +0000 (0:00:00.502) 0:08:06.102 ***
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Collect osd ids] **********************************************
Wednesday 17 September 2025 16:09:20 +0000 (0:00:00.967) 0:08:07.069 ***
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [ceph-osd : Include_tasks systemd.yml] ************************************
Wednesday 17 September 2025 16:09:23 +0000 (0:00:02.605) 0:08:09.674 ***
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Generate systemd unit file] ***********************************
Wednesday 17 September 2025 16:09:24 +0000 (0:00:00.549) 0:08:10.224 ***
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
Wednesday 17 September 2025 16:09:25 +0000 (0:00:01.450) 0:08:11.675 ***
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Enable ceph-osd.target] ***************************************
Wednesday 17 September 2025 16:09:26 +0000 (0:00:01.151) 0:08:12.826 ***
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Ensure systemd service override directory exists] *************
Wednesday 17 September 2025 16:09:28 +0000 (0:00:01.795) 0:08:14.621 ***
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Add ceph-osd
systemd service overrides] *********************** 2025-09-17 16:12:32.381756 | orchestrator | Wednesday 17 September 2025 16:09:28 +0000 (0:00:00.315) 0:08:14.937 *** 2025-09-17 16:12:32.381762 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.381767 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.381772 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.381777 | orchestrator | 2025-09-17 16:12:32.381783 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-17 16:12:32.381791 | orchestrator | Wednesday 17 September 2025 16:09:29 +0000 (0:00:00.535) 0:08:15.472 *** 2025-09-17 16:12:32.381797 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-09-17 16:12:32.381802 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-09-17 16:12:32.381807 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-09-17 16:12:32.381813 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-17 16:12:32.381818 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-09-17 16:12:32.381823 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-09-17 16:12:32.381828 | orchestrator | 2025-09-17 16:12:32.381834 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-17 16:12:32.381839 | orchestrator | Wednesday 17 September 2025 16:09:30 +0000 (0:00:01.023) 0:08:16.495 *** 2025-09-17 16:12:32.381844 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-17 16:12:32.381850 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-17 16:12:32.381855 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-17 16:12:32.381860 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-17 16:12:32.381866 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-17 16:12:32.381871 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-17 16:12:32.381876 | orchestrator | 2025-09-17 16:12:32.381884 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-09-17 16:12:32.381890 | orchestrator | Wednesday 17 September 2025 16:09:32 +0000 (0:00:02.280) 0:08:18.776 *** 2025-09-17 16:12:32.381895 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-17 16:12:32.381900 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-17 16:12:32.381905 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-17 16:12:32.381911 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-17 16:12:32.381916 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-17 16:12:32.381921 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-17 16:12:32.381926 | orchestrator | 2025-09-17 16:12:32.381932 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-17 16:12:32.381937 | orchestrator | Wednesday 17 September 2025 16:09:36 +0000 (0:00:03.559) 0:08:22.335 *** 2025-09-17 16:12:32.381942 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.381947 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.381953 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-17 16:12:32.381958 | orchestrator | 2025-09-17 16:12:32.381963 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-17 16:12:32.381968 | orchestrator | Wednesday 17 September 2025 16:09:39 +0000 (0:00:02.904) 0:08:25.239 *** 2025-09-17 16:12:32.381974 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.381979 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.381984 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-09-17 16:12:32.381990 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-17 16:12:32.381995 | orchestrator | 2025-09-17 16:12:32.382000 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-17 16:12:32.382006 | orchestrator | Wednesday 17 September 2025 16:09:51 +0000 (0:00:12.678) 0:08:37.918 *** 2025-09-17 16:12:32.382011 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382036 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.382041 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.382046 | orchestrator | 2025-09-17 16:12:32.382052 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-17 16:12:32.382057 | orchestrator | Wednesday 17 September 2025 16:09:52 +0000 (0:00:00.846) 0:08:38.765 *** 2025-09-17 16:12:32.382062 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382068 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.382073 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.382078 | orchestrator | 2025-09-17 16:12:32.382087 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-17 16:12:32.382096 | orchestrator | Wednesday 17 September 2025 16:09:52 +0000 (0:00:00.301) 0:08:39.066 *** 2025-09-17 16:12:32.382101 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.382107 | orchestrator | 2025-09-17 16:12:32.382112 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-17 16:12:32.382117 | orchestrator | Wednesday 17 September 2025 16:09:53 +0000 (0:00:00.465) 0:08:39.532 *** 2025-09-17 16:12:32.382122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 16:12:32.382128 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-09-17 16:12:32.382133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 16:12:32.382138 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382144 | orchestrator | 2025-09-17 16:12:32.382149 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-17 16:12:32.382154 | orchestrator | Wednesday 17 September 2025 16:09:54 +0000 (0:00:00.668) 0:08:40.201 *** 2025-09-17 16:12:32.382160 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382165 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.382170 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.382175 | orchestrator | 2025-09-17 16:12:32.382181 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-17 16:12:32.382186 | orchestrator | Wednesday 17 September 2025 16:09:54 +0000 (0:00:00.258) 0:08:40.459 *** 2025-09-17 16:12:32.382191 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382210 | orchestrator | 2025-09-17 16:12:32.382215 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-17 16:12:32.382220 | orchestrator | Wednesday 17 September 2025 16:09:54 +0000 (0:00:00.196) 0:08:40.655 *** 2025-09-17 16:12:32.382226 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382231 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.382236 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.382241 | orchestrator | 2025-09-17 16:12:32.382247 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-17 16:12:32.382252 | orchestrator | Wednesday 17 September 2025 16:09:54 +0000 (0:00:00.278) 0:08:40.934 *** 2025-09-17 16:12:32.382257 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382263 | orchestrator | 2025-09-17 16:12:32.382268 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-17 16:12:32.382273 | orchestrator | Wednesday 17 September 2025 16:09:55 +0000 (0:00:00.209) 0:08:41.143 *** 2025-09-17 16:12:32.382279 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382284 | orchestrator | 2025-09-17 16:12:32.382289 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-17 16:12:32.382295 | orchestrator | Wednesday 17 September 2025 16:09:55 +0000 (0:00:00.209) 0:08:41.353 *** 2025-09-17 16:12:32.382300 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382305 | orchestrator | 2025-09-17 16:12:32.382310 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-17 16:12:32.382316 | orchestrator | Wednesday 17 September 2025 16:09:55 +0000 (0:00:00.117) 0:08:41.470 *** 2025-09-17 16:12:32.382321 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382326 | orchestrator | 2025-09-17 16:12:32.382332 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-17 16:12:32.382340 | orchestrator | Wednesday 17 September 2025 16:09:55 +0000 (0:00:00.195) 0:08:41.666 *** 2025-09-17 16:12:32.382346 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382351 | orchestrator | 2025-09-17 16:12:32.382357 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-17 16:12:32.382362 | orchestrator | Wednesday 17 September 2025 16:09:56 +0000 (0:00:00.564) 0:08:42.231 *** 2025-09-17 16:12:32.382367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 16:12:32.382373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-17 16:12:32.382382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 16:12:32.382387 | orchestrator | skipping: [testbed-node-3] 2025-09-17 
16:12:32.382393 | orchestrator | 2025-09-17 16:12:32.382398 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-17 16:12:32.382403 | orchestrator | Wednesday 17 September 2025 16:09:56 +0000 (0:00:00.391) 0:08:42.622 *** 2025-09-17 16:12:32.382409 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382414 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.382419 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.382424 | orchestrator | 2025-09-17 16:12:32.382430 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-17 16:12:32.382435 | orchestrator | Wednesday 17 September 2025 16:09:56 +0000 (0:00:00.266) 0:08:42.889 *** 2025-09-17 16:12:32.382441 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382446 | orchestrator | 2025-09-17 16:12:32.382451 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-17 16:12:32.382456 | orchestrator | Wednesday 17 September 2025 16:09:57 +0000 (0:00:00.209) 0:08:43.099 *** 2025-09-17 16:12:32.382462 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382467 | orchestrator | 2025-09-17 16:12:32.382472 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-17 16:12:32.382477 | orchestrator | 2025-09-17 16:12:32.382483 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-17 16:12:32.382488 | orchestrator | Wednesday 17 September 2025 16:09:57 +0000 (0:00:00.628) 0:08:43.727 *** 2025-09-17 16:12:32.382493 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.382499 | orchestrator | 2025-09-17 16:12:32.382504 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-09-17 16:12:32.382509 | orchestrator | Wednesday 17 September 2025 16:09:58 +0000 (0:00:01.150) 0:08:44.877 *** 2025-09-17 16:12:32.382517 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.382523 | orchestrator | 2025-09-17 16:12:32.382528 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-17 16:12:32.382534 | orchestrator | Wednesday 17 September 2025 16:09:59 +0000 (0:00:01.168) 0:08:46.046 *** 2025-09-17 16:12:32.382539 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.382544 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382550 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.382555 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.382560 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.382566 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.382571 | orchestrator | 2025-09-17 16:12:32.382576 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-17 16:12:32.382582 | orchestrator | Wednesday 17 September 2025 16:10:00 +0000 (0:00:00.964) 0:08:47.011 *** 2025-09-17 16:12:32.382587 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.382592 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.382598 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.382603 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.382608 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.382614 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.382619 | orchestrator | 2025-09-17 16:12:32.382624 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-17 16:12:32.382630 | orchestrator | Wednesday 17 
September 2025 16:10:01 +0000 (0:00:00.984) 0:08:47.996 *** 2025-09-17 16:12:32.382635 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.382640 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.382646 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.382654 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.382660 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.382665 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.382670 | orchestrator | 2025-09-17 16:12:32.382676 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-17 16:12:32.382681 | orchestrator | Wednesday 17 September 2025 16:10:03 +0000 (0:00:01.252) 0:08:49.248 *** 2025-09-17 16:12:32.382686 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.382692 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.382697 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.382702 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.382708 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.382713 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.382718 | orchestrator | 2025-09-17 16:12:32.382723 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-17 16:12:32.382729 | orchestrator | Wednesday 17 September 2025 16:10:04 +0000 (0:00:01.026) 0:08:50.274 *** 2025-09-17 16:12:32.382734 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.382739 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382744 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.382750 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.382755 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.382760 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.382765 | orchestrator | 2025-09-17 16:12:32.382771 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-09-17 16:12:32.382776 | orchestrator | Wednesday 17 September 2025 16:10:05 +0000 (0:00:00.981) 0:08:51.255 *** 2025-09-17 16:12:32.382782 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.382787 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.382792 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.382797 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382805 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.382811 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.382817 | orchestrator | 2025-09-17 16:12:32.382822 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-17 16:12:32.382827 | orchestrator | Wednesday 17 September 2025 16:10:05 +0000 (0:00:00.582) 0:08:51.838 *** 2025-09-17 16:12:32.382833 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.382838 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.382843 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.382848 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.382854 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.382859 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.382864 | orchestrator | 2025-09-17 16:12:32.382869 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-17 16:12:32.382875 | orchestrator | Wednesday 17 September 2025 16:10:06 +0000 (0:00:00.711) 0:08:52.550 *** 2025-09-17 16:12:32.382880 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.382885 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.382891 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.382896 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.382901 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.382906 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.382912 | orchestrator 
| 2025-09-17 16:12:32.382917 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-17 16:12:32.382922 | orchestrator | Wednesday 17 September 2025 16:10:07 +0000 (0:00:00.944) 0:08:53.495 *** 2025-09-17 16:12:32.382928 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.382933 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.382938 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.382943 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.382949 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.382954 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.382959 | orchestrator | 2025-09-17 16:12:32.382965 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-17 16:12:32.382975 | orchestrator | Wednesday 17 September 2025 16:10:08 +0000 (0:00:01.102) 0:08:54.597 *** 2025-09-17 16:12:32.382980 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.382986 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.382991 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.382996 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.383001 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.383007 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.383012 | orchestrator | 2025-09-17 16:12:32.383017 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-17 16:12:32.383022 | orchestrator | Wednesday 17 September 2025 16:10:09 +0000 (0:00:00.503) 0:08:55.101 *** 2025-09-17 16:12:32.383030 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.383036 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.383041 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.383046 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.383051 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.383057 | 
orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.383062 | orchestrator | 2025-09-17 16:12:32.383067 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-17 16:12:32.383073 | orchestrator | Wednesday 17 September 2025 16:10:09 +0000 (0:00:00.670) 0:08:55.771 *** 2025-09-17 16:12:32.383078 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.383083 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.383089 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.383094 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.383099 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.383104 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.383110 | orchestrator | 2025-09-17 16:12:32.383115 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-17 16:12:32.383120 | orchestrator | Wednesday 17 September 2025 16:10:10 +0000 (0:00:00.535) 0:08:56.306 *** 2025-09-17 16:12:32.383126 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.383131 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.383136 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.383141 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.383147 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.383152 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.383157 | orchestrator | 2025-09-17 16:12:32.383163 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-17 16:12:32.383168 | orchestrator | Wednesday 17 September 2025 16:10:10 +0000 (0:00:00.681) 0:08:56.988 *** 2025-09-17 16:12:32.383173 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.383178 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.383184 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.383189 | orchestrator | ok: [testbed-node-3] 
2025-09-17 16:12:32.383222 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.383228 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.383233 | orchestrator | 2025-09-17 16:12:32.383239 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-17 16:12:32.383244 | orchestrator | Wednesday 17 September 2025 16:10:11 +0000 (0:00:00.517) 0:08:57.505 *** 2025-09-17 16:12:32.383249 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.383254 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.383260 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.383265 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.383270 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.383275 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.383281 | orchestrator | 2025-09-17 16:12:32.383286 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-17 16:12:32.383292 | orchestrator | Wednesday 17 September 2025 16:10:12 +0000 (0:00:00.647) 0:08:58.153 *** 2025-09-17 16:12:32.383297 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:32.383302 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:32.383312 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:32.383317 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.383322 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.383327 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.383333 | orchestrator | 2025-09-17 16:12:32.383338 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-17 16:12:32.383343 | orchestrator | Wednesday 17 September 2025 16:10:12 +0000 (0:00:00.495) 0:08:58.649 *** 2025-09-17 16:12:32.383348 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.383354 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.383359 | 
orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.383367 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.383373 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.383378 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.383383 | orchestrator | 2025-09-17 16:12:32.383389 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-17 16:12:32.383394 | orchestrator | Wednesday 17 September 2025 16:10:13 +0000 (0:00:00.691) 0:08:59.341 *** 2025-09-17 16:12:32.383399 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.383405 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.383410 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.383415 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.383420 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.383426 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.383431 | orchestrator | 2025-09-17 16:12:32.383436 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-17 16:12:32.383442 | orchestrator | Wednesday 17 September 2025 16:10:13 +0000 (0:00:00.516) 0:08:59.857 *** 2025-09-17 16:12:32.383447 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.383452 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:32.383457 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:12:32.383463 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.383468 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.383473 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.383478 | orchestrator | 2025-09-17 16:12:32.383484 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-17 16:12:32.383489 | orchestrator | Wednesday 17 September 2025 16:10:14 +0000 (0:00:01.147) 0:09:01.005 *** 2025-09-17 16:12:32.383494 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:12:32.383500 | orchestrator 
| 2025-09-17 16:12:32.383505 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-17 16:12:32.383510 | orchestrator | Wednesday 17 September 2025 16:10:19 +0000 (0:00:04.151) 0:09:05.156 *** 2025-09-17 16:12:32.383516 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.383521 | orchestrator | 2025-09-17 16:12:32.383526 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-17 16:12:32.383532 | orchestrator | Wednesday 17 September 2025 16:10:21 +0000 (0:00:02.567) 0:09:07.724 *** 2025-09-17 16:12:32.383537 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:32.383542 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:12:32.383548 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:12:32.383553 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:12:32.383558 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:12:32.383563 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:12:32.383569 | orchestrator | 2025-09-17 16:12:32.383574 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-17 16:12:32.383582 | orchestrator | Wednesday 17 September 2025 16:10:23 +0000 (0:00:01.558) 0:09:09.283 *** 2025-09-17 16:12:32.383588 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:12:32.383593 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:12:32.383598 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:12:32.383604 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:12:32.383609 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:12:32.383614 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:12:32.383618 | orchestrator | 2025-09-17 16:12:32.383627 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-09-17 16:12:32.383631 | orchestrator | Wednesday 17 September 2025 16:10:24 +0000 (0:00:01.267) 0:09:10.550 
***
2025-09-17 16:12:32.383636 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-2, testbed-node-1, testbed-node-5, testbed-node-4, testbed-node-3
2025-09-17 16:12:32.383641 | orchestrator |
2025-09-17 16:12:32.383646 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-09-17 16:12:32.383651 | orchestrator | Wednesday 17 September 2025 16:10:25 +0000 (0:00:01.392) 0:09:11.943 ***
2025-09-17 16:12:32.383656 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.383660 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.383665 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.383670 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.383674 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.383679 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.383684 | orchestrator |
2025-09-17 16:12:32.383688 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-09-17 16:12:32.383693 | orchestrator | Wednesday 17 September 2025 16:10:27 +0000 (0:00:01.701) 0:09:13.645 ***
2025-09-17 16:12:32.383698 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.383702 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.383707 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.383712 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.383716 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.383721 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.383726 | orchestrator |
2025-09-17 16:12:32.383730 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-09-17 16:12:32.383735 | orchestrator | Wednesday 17 September 2025 16:10:30 +0000 (0:00:03.089) 0:09:16.735 ***
2025-09-17 16:12:32.383740 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.383745 | orchestrator |
2025-09-17 16:12:32.383749 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-09-17 16:12:32.383754 | orchestrator | Wednesday 17 September 2025 16:10:31 +0000 (0:00:01.306) 0:09:18.041 ***
2025-09-17 16:12:32.383759 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.383763 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.383768 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.383773 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.383778 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.383782 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.383787 | orchestrator |
2025-09-17 16:12:32.383792 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-09-17 16:12:32.383796 | orchestrator | Wednesday 17 September 2025 16:10:32 +0000 (0:00:00.722) 0:09:18.764 ***
2025-09-17 16:12:32.383801 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:32.383806 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:32.383811 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:32.383815 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.383820 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.383827 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.383831 | orchestrator |
2025-09-17 16:12:32.383836 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-09-17 16:12:32.383841 | orchestrator | Wednesday 17 September 2025 16:10:34 +0000 (0:00:02.011) 0:09:20.775 ***
2025-09-17 16:12:32.383846 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:32.383850 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:12:32.383855 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:12:32.383860 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.383864 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.383869 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.383874 | orchestrator |
2025-09-17 16:12:32.383878 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-09-17 16:12:32.383887 | orchestrator |
2025-09-17 16:12:32.383891 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-17 16:12:32.383896 | orchestrator | Wednesday 17 September 2025 16:10:35 +0000 (0:00:00.909) 0:09:21.685 ***
2025-09-17 16:12:32.383901 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.383906 | orchestrator |
2025-09-17 16:12:32.383910 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-17 16:12:32.383915 | orchestrator | Wednesday 17 September 2025 16:10:36 +0000 (0:00:00.592) 0:09:22.278 ***
2025-09-17 16:12:32.383920 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.383925 | orchestrator |
2025-09-17 16:12:32.383929 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-17 16:12:32.383934 | orchestrator | Wednesday 17 September 2025 16:10:36 +0000 (0:00:00.491) 0:09:22.770 ***
2025-09-17 16:12:32.383939 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.383944 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.383948 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.383953 | orchestrator |
2025-09-17 16:12:32.383958 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-17 16:12:32.383962 | orchestrator | Wednesday 17 September 2025 16:10:36 +0000 (0:00:00.252) 0:09:23.022 ***
2025-09-17 16:12:32.383967 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.383972 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.383976 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.383981 | orchestrator |
2025-09-17 16:12:32.383986 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-17 16:12:32.383993 | orchestrator | Wednesday 17 September 2025 16:10:37 +0000 (0:00:00.828) 0:09:23.851 ***
2025-09-17 16:12:32.383998 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.384003 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.384007 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.384012 | orchestrator |
2025-09-17 16:12:32.384017 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-17 16:12:32.384021 | orchestrator | Wednesday 17 September 2025 16:10:38 +0000 (0:00:00.677) 0:09:24.529 ***
2025-09-17 16:12:32.384026 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.384031 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.384036 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.384040 | orchestrator |
2025-09-17 16:12:32.384045 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-17 16:12:32.384050 | orchestrator | Wednesday 17 September 2025 16:10:39 +0000 (0:00:00.684) 0:09:25.214 ***
2025-09-17 16:12:32.384054 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.384059 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.384064 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.384069 | orchestrator |
2025-09-17 16:12:32.384073 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-17 16:12:32.384078 | orchestrator | Wednesday 17 September 2025 16:10:39 +0000 (0:00:00.317) 0:09:25.531 ***
2025-09-17 16:12:32.384083 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.384087 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.384092 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.384097 | orchestrator |
2025-09-17 16:12:32.384101 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-17 16:12:32.384106 | orchestrator | Wednesday 17 September 2025 16:10:39 +0000 (0:00:00.538) 0:09:26.069 ***
2025-09-17 16:12:32.384111 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.384115 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.384120 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.384125 | orchestrator |
2025-09-17 16:12:32.384130 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-17 16:12:32.384148 | orchestrator | Wednesday 17 September 2025 16:10:40 +0000 (0:00:00.332) 0:09:26.402 ***
2025-09-17 16:12:32.384153 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.384158 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.384163 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.384168 | orchestrator |
2025-09-17 16:12:32.384172 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-17 16:12:32.384177 | orchestrator | Wednesday 17 September 2025 16:10:41 +0000 (0:00:00.998) 0:09:27.400 ***
2025-09-17 16:12:32.384182 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.384187 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.384191 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.384204 | orchestrator |
2025-09-17 16:12:32.384209 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-17 16:12:32.384213 | orchestrator | Wednesday 17 September 2025 16:10:42 +0000 (0:00:00.835) 0:09:28.236 ***
2025-09-17 16:12:32.384218 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.384223 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.384227 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.384232 | orchestrator |
2025-09-17 16:12:32.384237 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-17 16:12:32.384242 | orchestrator | Wednesday 17 September 2025 16:10:42 +0000 (0:00:00.645) 0:09:28.881 ***
2025-09-17 16:12:32.384246 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.384251 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.384256 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.384261 | orchestrator |
2025-09-17 16:12:32.384268 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-17 16:12:32.384273 | orchestrator | Wednesday 17 September 2025 16:10:43 +0000 (0:00:00.430) 0:09:29.312 ***
2025-09-17 16:12:32.384278 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.384283 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.384287 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.384292 | orchestrator |
2025-09-17 16:12:32.384297 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-17 16:12:32.384302 | orchestrator | Wednesday 17 September 2025 16:10:43 +0000 (0:00:00.425) 0:09:29.737 ***
2025-09-17 16:12:32.384306 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.384311 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.384316 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.384320 | orchestrator |
2025-09-17 16:12:32.384325 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-17 16:12:32.384330 | orchestrator | Wednesday 17 September 2025 16:10:44 +0000 (0:00:00.426) 0:09:30.164 ***
2025-09-17 16:12:32.384334 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.384339 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.384344 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.384348 | orchestrator |
2025-09-17 16:12:32.384353 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-17 16:12:32.384358 | orchestrator | Wednesday 17 September 2025 16:10:44 +0000 (0:00:00.683) 0:09:30.847 ***
2025-09-17 16:12:32.384362 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.384367 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.384372 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.384376 | orchestrator |
2025-09-17 16:12:32.384381 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-17 16:12:32.384386 | orchestrator | Wednesday 17 September 2025 16:10:45 +0000 (0:00:00.347) 0:09:31.195 ***
2025-09-17 16:12:32.384390 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.384395 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.384400 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.384404 | orchestrator |
2025-09-17 16:12:32.384409 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-17 16:12:32.384414 | orchestrator | Wednesday 17 September 2025 16:10:45 +0000 (0:00:00.290) 0:09:31.485 ***
2025-09-17 16:12:32.384424 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.384428 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.384433 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.384438 | orchestrator |
2025-09-17 16:12:32.384443 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-17 16:12:32.384447 | orchestrator | Wednesday 17 September 2025 16:10:45 +0000 (0:00:00.282) 0:09:31.767 ***
2025-09-17 16:12:32.384452 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.384460 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.384464 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.384469 | orchestrator |
2025-09-17 16:12:32.384474 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-17 16:12:32.384479 | orchestrator | Wednesday 17 September 2025 16:10:46 +0000 (0:00:00.466) 0:09:32.234 ***
2025-09-17 16:12:32.384483 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.384488 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.384493 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.384497 | orchestrator |
2025-09-17 16:12:32.384502 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-09-17 16:12:32.384507 | orchestrator | Wednesday 17 September 2025 16:10:46 +0000 (0:00:00.461) 0:09:32.695 ***
2025-09-17 16:12:32.384512 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.384516 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.384521 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-09-17 16:12:32.384526 | orchestrator |
2025-09-17 16:12:32.384531 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-09-17 16:12:32.384535 | orchestrator | Wednesday 17 September 2025 16:10:47 +0000 (0:00:00.549) 0:09:33.245 ***
2025-09-17 16:12:32.384540 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-17 16:12:32.384545 | orchestrator |
2025-09-17 16:12:32.384549 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-09-17 16:12:32.384554 | orchestrator | Wednesday 17 September 2025 16:10:49 +0000 (0:00:02.308) 0:09:35.553 ***
2025-09-17 16:12:32.384559 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-09-17 16:12:32.384566 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.384571 | orchestrator |
2025-09-17 16:12:32.384575 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-09-17 16:12:32.384580 | orchestrator | Wednesday 17 September 2025 16:10:49 +0000 (0:00:00.218) 0:09:35.772 ***
2025-09-17 16:12:32.384586 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-17 16:12:32.384596 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-17 16:12:32.384601 | orchestrator |
2025-09-17 16:12:32.384606 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-09-17 16:12:32.384610 | orchestrator | Wednesday 17 September 2025 16:10:57 +0000 (0:00:07.895) 0:09:43.668 ***
2025-09-17 16:12:32.384615 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-17 16:12:32.384620 | orchestrator |
2025-09-17 16:12:32.384625 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-09-17 16:12:32.384632 | orchestrator | Wednesday 17 September 2025 16:11:01 +0000 (0:00:03.641) 0:09:47.309 ***
2025-09-17 16:12:32.384637 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.384644 | orchestrator |
2025-09-17 16:12:32.384649 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-09-17 16:12:32.384654 | orchestrator | Wednesday 17 September 2025 16:11:01 +0000 (0:00:00.490) 0:09:47.799 ***
2025-09-17 16:12:32.384659 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-17 16:12:32.384663 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-17 16:12:32.384668 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-17 16:12:32.384673 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-09-17 16:12:32.384677 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-09-17 16:12:32.384682 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-09-17 16:12:32.384687 | orchestrator |
2025-09-17 16:12:32.384691 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-09-17 16:12:32.384696 | orchestrator | Wednesday 17 September 2025 16:11:03 +0000 (0:00:01.292) 0:09:49.092 ***
2025-09-17 16:12:32.384701 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 16:12:32.384706 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-17 16:12:32.384710 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-17 16:12:32.384715 | orchestrator |
2025-09-17 16:12:32.384720 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-09-17 16:12:32.384725 | orchestrator | Wednesday 17 September 2025 16:11:05 +0000 (0:00:02.077) 0:09:51.170 ***
2025-09-17 16:12:32.384729 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-17 16:12:32.384734 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-17 16:12:32.384739 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.384743 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-17 16:12:32.384748 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-17 16:12:32.384753 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.384757 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-17 16:12:32.384765 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-17 16:12:32.384770 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.384775 | orchestrator |
2025-09-17 16:12:32.384780 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-09-17 16:12:32.384785 | orchestrator | Wednesday 17 September 2025 16:11:06 +0000 (0:00:01.164) 0:09:52.335 ***
2025-09-17 16:12:32.384789 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.384794 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.384799 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.384803 | orchestrator |
2025-09-17 16:12:32.384808 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-09-17 16:12:32.384813 | orchestrator | Wednesday 17 September 2025 16:11:08 +0000 (0:00:02.641) 0:09:54.976 ***
2025-09-17 16:12:32.384817 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.384822 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.384827 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.384831 | orchestrator |
2025-09-17 16:12:32.384836 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-09-17 16:12:32.384841 | orchestrator | Wednesday 17 September 2025 16:11:09 +0000 (0:00:00.414) 0:09:55.391 ***
2025-09-17 16:12:32.384845 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.384850 | orchestrator |
2025-09-17 16:12:32.384855 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-09-17 16:12:32.384860 | orchestrator | Wednesday 17 September 2025 16:11:09 +0000 (0:00:00.463) 0:09:55.855 ***
2025-09-17 16:12:32.384864 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.384872 | orchestrator |
2025-09-17 16:12:32.384877 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-09-17 16:12:32.384882 | orchestrator | Wednesday 17 September 2025 16:11:10 +0000 (0:00:00.623) 0:09:56.478 ***
2025-09-17 16:12:32.384886 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.384891 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.384896 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.384901 | orchestrator |
2025-09-17 16:12:32.384905 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-09-17 16:12:32.384910 | orchestrator | Wednesday 17 September 2025 16:11:11 +0000 (0:00:01.178) 0:09:57.657 ***
2025-09-17 16:12:32.384915 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.384919 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.384924 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.384929 | orchestrator |
2025-09-17 16:12:32.384933 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-09-17 16:12:32.384938 | orchestrator | Wednesday 17 September 2025 16:11:12 +0000 (0:00:01.127) 0:09:58.784 ***
2025-09-17 16:12:32.384943 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.384947 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.384952 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.384957 | orchestrator |
2025-09-17 16:12:32.384962 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-09-17 16:12:32.384966 | orchestrator | Wednesday 17 September 2025 16:11:14 +0000 (0:00:01.684) 0:10:00.468 ***
2025-09-17 16:12:32.384971 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.384976 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.384981 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.384985 | orchestrator |
2025-09-17 16:12:32.384990 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-09-17 16:12:32.384997 | orchestrator | Wednesday 17 September 2025 16:11:16 +0000 (0:00:02.149) 0:10:02.618 ***
2025-09-17 16:12:32.385002 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385007 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385012 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385016 | orchestrator |
2025-09-17 16:12:32.385021 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-17 16:12:32.385026 | orchestrator | Wednesday 17 September 2025 16:11:17 +0000 (0:00:01.122) 0:10:03.741 ***
2025-09-17 16:12:32.385031 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.385035 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.385040 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.385045 | orchestrator |
2025-09-17 16:12:32.385049 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-17 16:12:32.385054 | orchestrator | Wednesday 17 September 2025 16:11:18 +0000 (0:00:00.615) 0:10:04.357 ***
2025-09-17 16:12:32.385059 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.385063 | orchestrator |
2025-09-17 16:12:32.385068 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-17 16:12:32.385073 | orchestrator | Wednesday 17 September 2025 16:11:18 +0000 (0:00:00.694) 0:10:05.051 ***
2025-09-17 16:12:32.385077 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385082 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385087 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385092 | orchestrator |
2025-09-17 16:12:32.385096 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-17 16:12:32.385101 | orchestrator | Wednesday 17 September 2025 16:11:19 +0000 (0:00:00.289) 0:10:05.341 ***
2025-09-17 16:12:32.385106 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.385110 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:12:32.385115 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:12:32.385120 | orchestrator |
2025-09-17 16:12:32.385125 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-17 16:12:32.385133 | orchestrator | Wednesday 17 September 2025 16:11:20 +0000 (0:00:01.194) 0:10:06.535 ***
2025-09-17 16:12:32.385137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-17 16:12:32.385142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-17 16:12:32.385147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-17 16:12:32.385151 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.385156 | orchestrator |
2025-09-17 16:12:32.385161 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-17 16:12:32.385168 | orchestrator | Wednesday 17 September 2025 16:11:21 +0000 (0:00:00.912) 0:10:07.447 ***
2025-09-17 16:12:32.385173 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385178 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385182 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385187 | orchestrator |
2025-09-17 16:12:32.385192 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-17 16:12:32.385210 | orchestrator |
2025-09-17 16:12:32.385215 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-17 16:12:32.385219 | orchestrator | Wednesday 17 September 2025 16:11:21 +0000 (0:00:00.496) 0:10:07.943 ***
2025-09-17 16:12:32.385224 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.385229 | orchestrator |
2025-09-17 16:12:32.385234 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-17 16:12:32.385239 | orchestrator | Wednesday 17 September 2025 16:11:22 +0000 (0:00:00.646) 0:10:08.590 ***
2025-09-17 16:12:32.385243 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.385248 | orchestrator |
2025-09-17 16:12:32.385253 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-17 16:12:32.385258 | orchestrator | Wednesday 17 September 2025 16:11:22 +0000 (0:00:00.451) 0:10:09.042 ***
2025-09-17 16:12:32.385262 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.385267 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.385272 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.385276 | orchestrator |
2025-09-17 16:12:32.385281 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-17 16:12:32.385286 | orchestrator | Wednesday 17 September 2025 16:11:23 +0000 (0:00:00.244) 0:10:09.287 ***
2025-09-17 16:12:32.385291 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385295 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385300 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385305 | orchestrator |
2025-09-17 16:12:32.385309 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-17 16:12:32.385314 | orchestrator | Wednesday 17 September 2025 16:11:24 +0000 (0:00:00.820) 0:10:10.107 ***
2025-09-17 16:12:32.385319 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385324 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385328 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385333 | orchestrator |
2025-09-17 16:12:32.385338 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-17 16:12:32.385342 | orchestrator | Wednesday 17 September 2025 16:11:24 +0000 (0:00:00.695) 0:10:10.802 ***
2025-09-17 16:12:32.385347 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385352 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385357 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385361 | orchestrator |
2025-09-17 16:12:32.385366 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-17 16:12:32.385371 | orchestrator | Wednesday 17 September 2025 16:11:25 +0000 (0:00:00.702) 0:10:11.505 ***
2025-09-17 16:12:32.385376 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.385380 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.385385 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.385390 | orchestrator |
2025-09-17 16:12:32.385398 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-17 16:12:32.385403 | orchestrator | Wednesday 17 September 2025 16:11:25 +0000 (0:00:00.240) 0:10:11.745 ***
2025-09-17 16:12:32.385410 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.385415 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.385420 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.385424 | orchestrator |
2025-09-17 16:12:32.385429 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-17 16:12:32.385434 | orchestrator | Wednesday 17 September 2025 16:11:26 +0000 (0:00:00.412) 0:10:12.158 ***
2025-09-17 16:12:32.385439 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.385443 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.385448 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.385453 | orchestrator |
2025-09-17 16:12:32.385458 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-17 16:12:32.385462 | orchestrator | Wednesday 17 September 2025 16:11:26 +0000 (0:00:00.282) 0:10:12.441 ***
2025-09-17 16:12:32.385467 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385472 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385476 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385481 | orchestrator |
2025-09-17 16:12:32.385486 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-17 16:12:32.385491 | orchestrator | Wednesday 17 September 2025 16:11:27 +0000 (0:00:00.647) 0:10:13.089 ***
2025-09-17 16:12:32.385495 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385500 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385505 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385509 | orchestrator |
2025-09-17 16:12:32.385514 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-17 16:12:32.385519 | orchestrator | Wednesday 17 September 2025 16:11:27 +0000 (0:00:00.662) 0:10:13.751 ***
2025-09-17 16:12:32.385524 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.385528 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.385533 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.385538 | orchestrator |
2025-09-17 16:12:32.385542 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-17 16:12:32.385547 | orchestrator | Wednesday 17 September 2025 16:11:28 +0000 (0:00:00.411) 0:10:14.163 ***
2025-09-17 16:12:32.385552 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.385557 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.385561 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.385566 | orchestrator |
2025-09-17 16:12:32.385571 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-17 16:12:32.385575 | orchestrator | Wednesday 17 September 2025 16:11:28 +0000 (0:00:00.286) 0:10:14.450 ***
2025-09-17 16:12:32.385580 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385585 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385590 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385594 | orchestrator |
2025-09-17 16:12:32.385601 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-17 16:12:32.385606 | orchestrator | Wednesday 17 September 2025 16:11:28 +0000 (0:00:00.296) 0:10:14.747 ***
2025-09-17 16:12:32.385611 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385616 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385621 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385625 | orchestrator |
2025-09-17 16:12:32.385630 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-17 16:12:32.385635 | orchestrator | Wednesday 17 September 2025 16:11:28 +0000 (0:00:00.325) 0:10:15.072 ***
2025-09-17 16:12:32.385640 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385644 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385649 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385654 | orchestrator |
2025-09-17 16:12:32.385658 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-17 16:12:32.385666 | orchestrator | Wednesday 17 September 2025 16:11:29 +0000 (0:00:00.449) 0:10:15.522 ***
2025-09-17 16:12:32.385671 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.385676 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.385681 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.385685 | orchestrator |
2025-09-17 16:12:32.385690 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-17 16:12:32.385695 | orchestrator | Wednesday 17 September 2025 16:11:29 +0000 (0:00:00.266) 0:10:15.788 ***
2025-09-17 16:12:32.385699 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.385704 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.385709 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.385713 | orchestrator |
2025-09-17 16:12:32.385718 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-17 16:12:32.385723 | orchestrator | Wednesday 17 September 2025 16:11:29 +0000 (0:00:00.243) 0:10:16.032 ***
2025-09-17 16:12:32.385727 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:12:32.385732 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:12:32.385737 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:12:32.385741 | orchestrator |
2025-09-17 16:12:32.385746 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-17 16:12:32.385751 | orchestrator | Wednesday 17 September 2025 16:11:30 +0000 (0:00:00.283) 0:10:16.316 ***
2025-09-17 16:12:32.385756 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385760 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385765 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385770 | orchestrator |
2025-09-17 16:12:32.385774 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-17 16:12:32.385779 | orchestrator | Wednesday 17 September 2025 16:11:30 +0000 (0:00:00.537) 0:10:16.853 ***
2025-09-17 16:12:32.385784 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:12:32.385788 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:12:32.385793 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:12:32.385798 | orchestrator |
2025-09-17 16:12:32.385802 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-09-17 16:12:32.385807 | orchestrator | Wednesday 17 September 2025 16:11:31 +0000 (0:00:00.536) 0:10:17.390 ***
2025-09-17 16:12:32.385812 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:12:32.385817 | orchestrator |
2025-09-17 16:12:32.385821 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-09-17 16:12:32.385826 | orchestrator | Wednesday 17 September 2025 16:11:32 +0000 (0:00:00.725) 0:10:18.116 ***
2025-09-17 16:12:32.385831 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-17 16:12:32.385838 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-17 16:12:32.385843 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-17 16:12:32.385848 | orchestrator |
2025-09-17 16:12:32.385852 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-09-17 16:12:32.385857 | orchestrator | Wednesday 17 September 2025 16:11:34 +0000 (0:00:02.329) 0:10:20.446 ***
2025-09-17 16:12:32.385862 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-17 16:12:32.385867 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-17 16:12:32.385871 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:12:32.385876 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-17 16:12:32.385881
| orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-17 16:12:32.385886 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:12:32.385890 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-17 16:12:32.385895 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-17 16:12:32.385900 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:12:32.385904 | orchestrator | 2025-09-17 16:12:32.385909 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-17 16:12:32.385917 | orchestrator | Wednesday 17 September 2025 16:11:35 +0000 (0:00:01.240) 0:10:21.686 *** 2025-09-17 16:12:32.385922 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.385926 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.385931 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.385936 | orchestrator | 2025-09-17 16:12:32.385940 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-17 16:12:32.385945 | orchestrator | Wednesday 17 September 2025 16:11:35 +0000 (0:00:00.291) 0:10:21.977 *** 2025-09-17 16:12:32.385950 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.385955 | orchestrator | 2025-09-17 16:12:32.385960 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-17 16:12:32.385964 | orchestrator | Wednesday 17 September 2025 16:11:36 +0000 (0:00:00.593) 0:10:22.571 *** 2025-09-17 16:12:32.385969 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-17 16:12:32.385976 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 
'radosgw_frontend_port': 8081}) 2025-09-17 16:12:32.385981 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-17 16:12:32.385986 | orchestrator | 2025-09-17 16:12:32.385991 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-17 16:12:32.385996 | orchestrator | Wednesday 17 September 2025 16:11:37 +0000 (0:00:01.072) 0:10:23.643 *** 2025-09-17 16:12:32.386000 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:12:32.386005 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-17 16:12:32.386010 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:12:32.386027 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-17 16:12:32.386032 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:12:32.386037 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-17 16:12:32.386042 | orchestrator | 2025-09-17 16:12:32.386046 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-17 16:12:32.386051 | orchestrator | Wednesday 17 September 2025 16:11:41 +0000 (0:00:04.281) 0:10:27.924 *** 2025-09-17 16:12:32.386056 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:12:32.386061 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:12:32.386065 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] 
}}] 2025-09-17 16:12:32.386070 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-17 16:12:32.386075 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:12:32.386079 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-17 16:12:32.386084 | orchestrator | 2025-09-17 16:12:32.386089 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-17 16:12:32.386094 | orchestrator | Wednesday 17 September 2025 16:11:44 +0000 (0:00:02.231) 0:10:30.156 *** 2025-09-17 16:12:32.386099 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-17 16:12:32.386103 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:12:32.386108 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-17 16:12:32.386113 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:12:32.386118 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-17 16:12:32.386126 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:12:32.386130 | orchestrator | 2025-09-17 16:12:32.386135 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-17 16:12:32.386140 | orchestrator | Wednesday 17 September 2025 16:11:45 +0000 (0:00:01.368) 0:10:31.524 *** 2025-09-17 16:12:32.386145 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-17 16:12:32.386149 | orchestrator | 2025-09-17 16:12:32.386157 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-17 16:12:32.386162 | orchestrator | Wednesday 17 September 2025 16:11:45 +0000 (0:00:00.237) 0:10:31.762 *** 2025-09-17 16:12:32.386167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 16:12:32.386172 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 16:12:32.386176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 16:12:32.386181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 16:12:32.386186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 16:12:32.386191 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.386207 | orchestrator | 2025-09-17 16:12:32.386212 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-17 16:12:32.386216 | orchestrator | Wednesday 17 September 2025 16:11:46 +0000 (0:00:00.587) 0:10:32.350 *** 2025-09-17 16:12:32.386221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 16:12:32.386226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 16:12:32.386231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 16:12:32.386235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 16:12:32.386240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-17 16:12:32.386247 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.386252 | orchestrator | 2025-09-17 16:12:32.386257 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-17 16:12:32.386262 | orchestrator | Wednesday 17 September 2025 16:11:46 +0000 (0:00:00.561) 0:10:32.912 *** 2025-09-17 16:12:32.386267 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-17 16:12:32.386272 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-17 16:12:32.386277 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-17 16:12:32.386281 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-17 16:12:32.386286 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-17 16:12:32.386291 | orchestrator | 2025-09-17 16:12:32.386296 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-17 16:12:32.386303 | orchestrator | Wednesday 17 September 2025 16:12:17 +0000 (0:00:31.120) 0:11:04.032 *** 2025-09-17 16:12:32.386308 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.386313 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.386318 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.386322 | orchestrator | 2025-09-17 16:12:32.386327 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-17 16:12:32.386332 | orchestrator | Wednesday 17 September 2025 16:12:18 +0000 (0:00:00.337) 
0:11:04.370 *** 2025-09-17 16:12:32.386337 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.386341 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.386346 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.386351 | orchestrator | 2025-09-17 16:12:32.386355 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-17 16:12:32.386360 | orchestrator | Wednesday 17 September 2025 16:12:18 +0000 (0:00:00.306) 0:11:04.677 *** 2025-09-17 16:12:32.386365 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.386370 | orchestrator | 2025-09-17 16:12:32.386374 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-17 16:12:32.386379 | orchestrator | Wednesday 17 September 2025 16:12:19 +0000 (0:00:00.812) 0:11:05.489 *** 2025-09-17 16:12:32.386384 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.386388 | orchestrator | 2025-09-17 16:12:32.386393 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-17 16:12:32.386398 | orchestrator | Wednesday 17 September 2025 16:12:19 +0000 (0:00:00.505) 0:11:05.995 *** 2025-09-17 16:12:32.386403 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:12:32.386410 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:12:32.386415 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:12:32.386420 | orchestrator | 2025-09-17 16:12:32.386425 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-17 16:12:32.386430 | orchestrator | Wednesday 17 September 2025 16:12:21 +0000 (0:00:01.809) 0:11:07.805 *** 2025-09-17 16:12:32.386434 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:12:32.386439 | orchestrator | 
changed: [testbed-node-4] 2025-09-17 16:12:32.386444 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:12:32.386448 | orchestrator | 2025-09-17 16:12:32.386463 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-17 16:12:32.386468 | orchestrator | Wednesday 17 September 2025 16:12:22 +0000 (0:00:01.202) 0:11:09.008 *** 2025-09-17 16:12:32.386472 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:12:32.386477 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:12:32.386489 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:12:32.386494 | orchestrator | 2025-09-17 16:12:32.386498 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-17 16:12:32.386503 | orchestrator | Wednesday 17 September 2025 16:12:25 +0000 (0:00:02.623) 0:11:11.632 *** 2025-09-17 16:12:32.386508 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-17 16:12:32.386513 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-17 16:12:32.386518 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-17 16:12:32.386523 | orchestrator | 2025-09-17 16:12:32.386527 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-17 16:12:32.386532 | orchestrator | Wednesday 17 September 2025 16:12:28 +0000 (0:00:02.544) 0:11:14.176 *** 2025-09-17 16:12:32.386537 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.386542 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.386551 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.386555 | orchestrator | 2025-09-17 16:12:32.386560 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-09-17 16:12:32.386565 | orchestrator | Wednesday 17 September 2025 16:12:28 +0000 (0:00:00.347) 0:11:14.524 *** 2025-09-17 16:12:32.386572 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:12:32.386577 | orchestrator | 2025-09-17 16:12:32.386582 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-17 16:12:32.386586 | orchestrator | Wednesday 17 September 2025 16:12:29 +0000 (0:00:00.713) 0:11:15.238 *** 2025-09-17 16:12:32.386591 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.386596 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.386601 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.386605 | orchestrator | 2025-09-17 16:12:32.386610 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-17 16:12:32.386615 | orchestrator | Wednesday 17 September 2025 16:12:29 +0000 (0:00:00.310) 0:11:15.548 *** 2025-09-17 16:12:32.386620 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.386624 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:12:32.386629 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:12:32.386634 | orchestrator | 2025-09-17 16:12:32.386638 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-17 16:12:32.386643 | orchestrator | Wednesday 17 September 2025 16:12:29 +0000 (0:00:00.319) 0:11:15.867 *** 2025-09-17 16:12:32.386648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 16:12:32.386653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-17 16:12:32.386657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 16:12:32.386662 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:12:32.386667 | 
orchestrator | 2025-09-17 16:12:32.386671 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-17 16:12:32.386676 | orchestrator | Wednesday 17 September 2025 16:12:30 +0000 (0:00:00.811) 0:11:16.679 *** 2025-09-17 16:12:32.386681 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:12:32.386686 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:12:32.386690 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:12:32.386695 | orchestrator | 2025-09-17 16:12:32.386700 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:12:32.386705 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-09-17 16:12:32.386710 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-17 16:12:32.386715 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-17 16:12:32.386720 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-09-17 16:12:32.386724 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-17 16:12:32.386729 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-17 16:12:32.386734 | orchestrator | 2025-09-17 16:12:32.386739 | orchestrator | 2025-09-17 16:12:32.386743 | orchestrator | 2025-09-17 16:12:32.386748 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:12:32.386756 | orchestrator | Wednesday 17 September 2025 16:12:30 +0000 (0:00:00.246) 0:11:16.925 *** 2025-09-17 16:12:32.386761 | orchestrator | =============================================================================== 2025-09-17 16:12:32.386769 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 80.05s 2025-09-17 16:12:32.386774 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.70s 2025-09-17 16:12:32.386779 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.29s 2025-09-17 16:12:32.386784 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.12s 2025-09-17 16:12:32.386788 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.11s 2025-09-17 16:12:32.386793 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.96s 2025-09-17 16:12:32.386798 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.68s 2025-09-17 16:12:32.386803 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.18s 2025-09-17 16:12:32.386807 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.78s 2025-09-17 16:12:32.386812 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.90s 2025-09-17 16:12:32.386817 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.10s 2025-09-17 16:12:32.386821 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.38s 2025-09-17 16:12:32.386826 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.81s 2025-09-17 16:12:32.386831 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.28s 2025-09-17 16:12:32.386836 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.22s 2025-09-17 16:12:32.386840 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.15s 2025-09-17 16:12:32.386845 | orchestrator | ceph-mds : 
Create ceph filesystem --------------------------------------- 3.64s 2025-09-17 16:12:32.386850 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.56s 2025-09-17 16:12:32.386854 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.50s 2025-09-17 16:12:32.386859 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.32s 2025-09-17 16:12:32.386864 | orchestrator | 2025-09-17 16:12:32 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:32.386869 | orchestrator | 2025-09-17 16:12:32 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:35.410382 | orchestrator | 2025-09-17 16:12:35 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:35.412019 | orchestrator | 2025-09-17 16:12:35 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED 2025-09-17 16:12:35.413661 | orchestrator | 2025-09-17 16:12:35 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:35.413732 | orchestrator | 2025-09-17 16:12:35 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:38.455006 | orchestrator | 2025-09-17 16:12:38 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:38.456778 | orchestrator | 2025-09-17 16:12:38 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED 2025-09-17 16:12:38.459180 | orchestrator | 2025-09-17 16:12:38 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:38.459253 | orchestrator | 2025-09-17 16:12:38 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:41.504086 | orchestrator | 2025-09-17 16:12:41 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:41.505643 | orchestrator | 2025-09-17 16:12:41 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 
is in state STARTED 2025-09-17 16:12:41.507507 | orchestrator | 2025-09-17 16:12:41 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:41.507698 | orchestrator | 2025-09-17 16:12:41 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:44.557440 | orchestrator | 2025-09-17 16:12:44 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:44.557565 | orchestrator | 2025-09-17 16:12:44 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED 2025-09-17 16:12:44.558492 | orchestrator | 2025-09-17 16:12:44 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:44.558542 | orchestrator | 2025-09-17 16:12:44 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:47.599377 | orchestrator | 2025-09-17 16:12:47 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:47.601289 | orchestrator | 2025-09-17 16:12:47 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED 2025-09-17 16:12:47.602762 | orchestrator | 2025-09-17 16:12:47 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:47.602798 | orchestrator | 2025-09-17 16:12:47 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:50.640734 | orchestrator | 2025-09-17 16:12:50 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:50.643032 | orchestrator | 2025-09-17 16:12:50 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED 2025-09-17 16:12:50.645151 | orchestrator | 2025-09-17 16:12:50 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:50.646810 | orchestrator | 2025-09-17 16:12:50 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:53.689458 | orchestrator | 2025-09-17 16:12:53 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:53.691055 | 
orchestrator | 2025-09-17 16:12:53 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED 2025-09-17 16:12:53.693837 | orchestrator | 2025-09-17 16:12:53 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:53.693934 | orchestrator | 2025-09-17 16:12:53 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:56.735655 | orchestrator | 2025-09-17 16:12:56 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:56.736907 | orchestrator | 2025-09-17 16:12:56 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED 2025-09-17 16:12:56.738808 | orchestrator | 2025-09-17 16:12:56 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state STARTED 2025-09-17 16:12:56.738841 | orchestrator | 2025-09-17 16:12:56 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:12:59.779665 | orchestrator | 2025-09-17 16:12:59 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED 2025-09-17 16:12:59.782907 | orchestrator | 2025-09-17 16:12:59 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED 2025-09-17 16:12:59.782945 | orchestrator | 2025-09-17 16:12:59 | INFO  | Task 2a0c9b93-af5c-4655-95c4-85886a89dc18 is in state SUCCESS 2025-09-17 16:12:59.784437 | orchestrator | 2025-09-17 16:12:59.784469 | orchestrator | 2025-09-17 16:12:59.784481 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:12:59.784493 | orchestrator | 2025-09-17 16:12:59.784504 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:12:59.784515 | orchestrator | Wednesday 17 September 2025 16:10:19 +0000 (0:00:00.279) 0:00:00.279 *** 2025-09-17 16:12:59.784526 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:12:59.784538 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:12:59.784548 | orchestrator | ok: [testbed-node-2] 2025-09-17 
16:12:59.784559 | orchestrator | 2025-09-17 16:12:59.784571 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:12:59.784607 | orchestrator | Wednesday 17 September 2025 16:10:19 +0000 (0:00:00.310) 0:00:00.589 *** 2025-09-17 16:12:59.784620 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-17 16:12:59.784631 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-17 16:12:59.784641 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-17 16:12:59.784652 | orchestrator | 2025-09-17 16:12:59.784662 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-17 16:12:59.784673 | orchestrator | 2025-09-17 16:12:59.784683 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-17 16:12:59.784699 | orchestrator | Wednesday 17 September 2025 16:10:19 +0000 (0:00:00.440) 0:00:01.030 *** 2025-09-17 16:12:59.784710 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:12:59.784721 | orchestrator | 2025-09-17 16:12:59.784731 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-17 16:12:59.784742 | orchestrator | Wednesday 17 September 2025 16:10:20 +0000 (0:00:00.525) 0:00:01.556 *** 2025-09-17 16:12:59.784752 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-17 16:12:59.784763 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-17 16:12:59.784773 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-17 16:12:59.784784 | orchestrator | 2025-09-17 16:12:59.784794 | orchestrator | TASK [opensearch : Ensuring config directories exist] 
************************** 2025-09-17 16:12:59.784805 | orchestrator | Wednesday 17 September 2025 16:10:20 +0000 (0:00:00.706) 0:00:02.263 *** 2025-09-17 16:12:59.784819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:12:59.784835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:12:59.784869 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:12:59.784893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:12:59.784907 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:12:59.784920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:12:59.784932 | orchestrator | 2025-09-17 16:12:59.784943 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-17 16:12:59.784954 | orchestrator | Wednesday 17 September 2025 16:10:22 +0000 (0:00:01.878) 0:00:04.141 *** 2025-09-17 16:12:59.784965 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:12:59.784983 | orchestrator | 2025-09-17 16:12:59.784994 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-17 16:12:59.785005 | orchestrator | Wednesday 17 September 2025 16:10:23 +0000 (0:00:00.578) 0:00:04.719 *** 2025-09-17 16:12:59.785030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:12:59.785045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:12:59.785058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:12:59.785072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:12:59.785100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:12:59.785121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:12:59.785134 | orchestrator | 2025-09-17 16:12:59.785147 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-17 16:12:59.785159 | orchestrator | Wednesday 17 September 2025 16:10:26 +0000 (0:00:02.863) 0:00:07.582 *** 2025-09-17 16:12:59.785172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 16:12:59.785185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 16:12:59.785266 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:59.785294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 16:12:59.785308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 16:12:59.785321 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:59.785334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 16:12:59.785347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 16:12:59.785366 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:59.785379 | orchestrator | 2025-09-17 16:12:59.785390 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-17 16:12:59.785401 | orchestrator | Wednesday 17 September 2025 16:10:27 +0000 (0:00:01.078) 0:00:08.661 *** 2025-09-17 16:12:59.785431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 16:12:59.785444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 16:12:59.785455 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:59.785466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 16:12:59.785478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 16:12:59.785497 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:59.785518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-17 16:12:59.785530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-17 16:12:59.785541 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:12:59.785552 | orchestrator | 2025-09-17 
16:12:59.785563 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-17 16:12:59.785573 | orchestrator | Wednesday 17 September 2025 16:10:28 +0000 (0:00:00.931) 0:00:09.592 *** 2025-09-17 16:12:59.785584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:12:59.785596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 
16:12:59.785614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:12:59.785638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 
16:12:59.785651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:12:59.785663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:12:59.785681 | orchestrator | 2025-09-17 16:12:59.785692 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-17 16:12:59.785703 | orchestrator | Wednesday 17 September 2025 16:10:30 +0000 (0:00:02.385) 0:00:11.977 *** 2025-09-17 16:12:59.785714 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:12:59.785725 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:12:59.785736 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:12:59.785746 | orchestrator | 2025-09-17 16:12:59.785757 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-17 16:12:59.785767 | orchestrator | Wednesday 17 September 2025 16:10:34 +0000 (0:00:03.345) 0:00:15.323 *** 2025-09-17 16:12:59.785778 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:12:59.785788 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:12:59.785799 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:12:59.785809 | orchestrator | 2025-09-17 16:12:59.785820 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-17 16:12:59.785831 | orchestrator | Wednesday 17 September 2025 16:10:35 +0000 (0:00:01.619) 0:00:16.942 *** 2025-09-17 16:12:59.785854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:12:59.785867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:12:59.785878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-17 16:12:59.785901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:12:59.785924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:12:59.785937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-17 16:12:59.785948 | orchestrator | 2025-09-17 16:12:59.785959 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-17 16:12:59.785970 | orchestrator | Wednesday 17 September 2025 16:10:37 +0000 (0:00:02.111) 0:00:19.054 *** 2025-09-17 16:12:59.785981 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:12:59.785991 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:12:59.786002 | orchestrator | skipping: [testbed-node-2] 2025-09-17 
16:12:59.786012 | orchestrator |
2025-09-17 16:12:59.786080 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-17 16:12:59.786098 | orchestrator | Wednesday 17 September 2025 16:10:38 +0000 (0:00:00.286) 0:00:19.340 ***
2025-09-17 16:12:59.786108 | orchestrator |
2025-09-17 16:12:59.786119 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-17 16:12:59.786129 | orchestrator | Wednesday 17 September 2025 16:10:38 +0000 (0:00:00.058) 0:00:19.399 ***
2025-09-17 16:12:59.786140 | orchestrator |
2025-09-17 16:12:59.786150 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-17 16:12:59.786161 | orchestrator | Wednesday 17 September 2025 16:10:38 +0000 (0:00:00.058) 0:00:19.457 ***
2025-09-17 16:12:59.786171 | orchestrator |
2025-09-17 16:12:59.786182 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-09-17 16:12:59.786193 | orchestrator | Wednesday 17 September 2025 16:10:38 +0000 (0:00:00.185) 0:00:19.643 ***
2025-09-17 16:12:59.786222 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:59.786233 | orchestrator |
2025-09-17 16:12:59.786244 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-09-17 16:12:59.786255 | orchestrator | Wednesday 17 September 2025 16:10:38 +0000 (0:00:00.181) 0:00:19.825 ***
2025-09-17 16:12:59.786265 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:12:59.786276 | orchestrator |
2025-09-17 16:12:59.786286 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-09-17 16:12:59.786297 | orchestrator | Wednesday 17 September 2025 16:10:38 +0000 (0:00:00.194) 0:00:20.019 ***
2025-09-17 16:12:59.786307 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:59.786318 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:59.786328 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:59.786339 | orchestrator |
2025-09-17 16:12:59.786350 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-09-17 16:12:59.786360 | orchestrator | Wednesday 17 September 2025 16:11:34 +0000 (0:00:55.635) 0:01:15.655 ***
2025-09-17 16:12:59.786371 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:59.786381 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:12:59.786392 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:12:59.786402 | orchestrator |
2025-09-17 16:12:59.786413 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-17 16:12:59.786423 | orchestrator | Wednesday 17 September 2025 16:12:46 +0000 (0:01:12.128) 0:02:27.784 ***
2025-09-17 16:12:59.786434 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:12:59.786445 | orchestrator |
2025-09-17 16:12:59.786455 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-09-17 16:12:59.786466 | orchestrator | Wednesday 17 September 2025 16:12:47 +0000 (0:00:00.684) 0:02:28.469 ***
2025-09-17 16:12:59.786476 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:59.786487 | orchestrator |
2025-09-17 16:12:59.786498 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-09-17 16:12:59.786508 | orchestrator | Wednesday 17 September 2025 16:12:49 +0000 (0:00:02.421) 0:02:30.890 ***
2025-09-17 16:12:59.786519 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:12:59.786529 | orchestrator |
2025-09-17 16:12:59.786540 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-09-17 16:12:59.786551 | orchestrator | Wednesday 17 September 2025 16:12:51 +0000 (0:00:02.341) 0:02:33.231 ***
2025-09-17
16:12:59.786562 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:59.786572 | orchestrator |
2025-09-17 16:12:59.786588 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-09-17 16:12:59.786599 | orchestrator | Wednesday 17 September 2025 16:12:54 +0000 (0:00:03.035) 0:02:36.267 ***
2025-09-17 16:12:59.786610 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:12:59.786620 | orchestrator |
2025-09-17 16:12:59.786637 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:12:59.786650 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-17 16:12:59.786669 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-17 16:12:59.786679 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-17 16:12:59.786690 | orchestrator |
2025-09-17 16:12:59.786700 | orchestrator |
2025-09-17 16:12:59.786711 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:12:59.786722 | orchestrator | Wednesday 17 September 2025 16:12:57 +0000 (0:00:02.799) 0:02:39.066 ***
2025-09-17 16:12:59.786732 | orchestrator | ===============================================================================
2025-09-17 16:12:59.786743 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 72.13s
2025-09-17 16:12:59.786753 | orchestrator | opensearch : Restart opensearch container ------------------------------ 55.64s
2025-09-17 16:12:59.786764 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.35s
2025-09-17 16:12:59.786774 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.04s
2025-09-17 16:12:59.786785 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.86s
2025-09-17 16:12:59.786795 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.80s
2025-09-17 16:12:59.786806 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.42s
2025-09-17 16:12:59.786816 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.39s
2025-09-17 16:12:59.786826 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.34s
2025-09-17 16:12:59.786837 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.11s
2025-09-17 16:12:59.786847 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.88s
2025-09-17 16:12:59.786858 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.62s
2025-09-17 16:12:59.786868 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.08s
2025-09-17 16:12:59.786879 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.93s
2025-09-17 16:12:59.786890 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.71s
2025-09-17 16:12:59.786900 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s
2025-09-17 16:12:59.786910 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s
2025-09-17 16:12:59.786921 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2025-09-17 16:12:59.786931 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2025-09-17 16:12:59.786942 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-09-17 16:12:59.786953 | orchestrator |
2025-09-17 16:12:59 |
INFO  | Wait 1 second(s) until the next check
2025-09-17 16:13:02.832864 | orchestrator | 2025-09-17 16:13:02 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED
2025-09-17 16:13:02.836032 | orchestrator | 2025-09-17 16:13:02 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED
2025-09-17 16:13:02.836386 | orchestrator | 2025-09-17 16:13:02 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:13:05.879096 | orchestrator | 2025-09-17 16:13:05 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED
2025-09-17 16:13:05.880432 | orchestrator | 2025-09-17 16:13:05 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED
2025-09-17 16:13:05.880540 | orchestrator | 2025-09-17 16:13:05 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:13:08.927031 | orchestrator | 2025-09-17 16:13:08 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED
2025-09-17 16:13:08.929075 | orchestrator | 2025-09-17 16:13:08 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED
2025-09-17 16:13:08.929333 | orchestrator | 2025-09-17 16:13:08 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:13:11.977816 | orchestrator | 2025-09-17 16:13:11 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED
2025-09-17 16:13:11.979541 | orchestrator | 2025-09-17 16:13:11 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED
2025-09-17 16:13:11.979587 | orchestrator | 2025-09-17 16:13:11 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:13:15.023255 | orchestrator | 2025-09-17 16:13:15 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED
2025-09-17 16:13:15.024091 | orchestrator | 2025-09-17 16:13:15 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED
2025-09-17 16:13:15.024150 | orchestrator | 2025-09-17 16:13:15 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:13:18.063547 | orchestrator | 2025-09-17 16:13:18 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED
2025-09-17 16:13:18.065349 | orchestrator | 2025-09-17 16:13:18 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED
2025-09-17 16:13:18.065380 | orchestrator | 2025-09-17 16:13:18 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:13:21.105707 | orchestrator | 2025-09-17 16:13:21 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED
2025-09-17 16:13:21.109536 | orchestrator | 2025-09-17 16:13:21 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED
2025-09-17 16:13:21.109586 | orchestrator | 2025-09-17 16:13:21 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:13:24.148665 | orchestrator | 2025-09-17 16:13:24 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state STARTED
2025-09-17 16:13:24.149778 | orchestrator | 2025-09-17 16:13:24 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED
2025-09-17 16:13:24.149821 | orchestrator | 2025-09-17 16:13:24 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:13:27.200544 | orchestrator |
2025-09-17 16:13:27.200652 | orchestrator | 2025-09-17 16:13:27 | INFO  | Task feb28740-5abd-4b3f-84e3-6c3e9d03c34e is in state SUCCESS
2025-09-17 16:13:27.202348 | orchestrator |
2025-09-17 16:13:27.202395 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-09-17 16:13:27.202409 | orchestrator |
2025-09-17 16:13:27.202420 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-17 16:13:27.202432 | orchestrator | Wednesday 17 September 2025 16:10:18 +0000 (0:00:00.100) 0:00:00.100 ***
2025-09-17 16:13:27.202443 | orchestrator | ok: [localhost] => {
2025-09-17 16:13:27.202456 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
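The ignored "Check MariaDB service" failure that follows is a banner probe: the task connects to 192.168.16.9:3306 and waits for the string `MariaDB` to appear in the server greeting (an Ansible `wait_for`-style check with a search string; the exact module parameters are an assumption, not shown in this log). A minimal standalone sketch of that kind of probe, demonstrated against a throwaway local server rather than a real MariaDB instance:

```python
import socket
import threading
import time

def probe_banner(host, port, search, timeout=2.0):
    # Connect and look for `search` in the first bytes the server sends,
    # retrying until `timeout` elapses -- roughly what a wait_for-style
    # "search string" check does against a MariaDB port.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0) as s:
                s.settimeout(1.0)
                return search.encode() in s.recv(128)
        except OSError:
            time.sleep(0.1)  # not listening yet; retry until the deadline
    return False  # same outcome as the "Timeout when waiting for search string" error

# Demo: a throwaway local server whose greeting contains the version
# string, as a MariaDB handshake packet does.
def _fake_server(sock):
    conn, _ = sock.accept()
    conn.sendall(b"\x0a5.5.5-10.11.13-MariaDB\x00")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=_fake_server, args=(srv,), daemon=True).start()
found = probe_banner("127.0.0.1", srv.getsockname()[1], "MariaDB")
print(found)  # True
```

Because a freshly provisioned testbed has no database yet, the probe times out and the playbook ignores the failure, which is exactly why the preceding debug message says "This is fine."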
2025-09-17 16:13:27.202467 | orchestrator | }
2025-09-17 16:13:27.202478 | orchestrator |
2025-09-17 16:13:27.202490 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-09-17 16:13:27.202501 | orchestrator | Wednesday 17 September 2025 16:10:18 +0000 (0:00:00.058) 0:00:00.159 ***
2025-09-17 16:13:27.202512 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-09-17 16:13:27.202525 | orchestrator | ...ignoring
2025-09-17 16:13:27.202536 | orchestrator |
2025-09-17 16:13:27.202547 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-09-17 16:13:27.202558 | orchestrator | Wednesday 17 September 2025 16:10:21 +0000 (0:00:02.818) 0:00:02.978 ***
2025-09-17 16:13:27.202569 | orchestrator | skipping: [localhost]
2025-09-17 16:13:27.202600 | orchestrator |
2025-09-17 16:13:27.202612 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-09-17 16:13:27.202623 | orchestrator | Wednesday 17 September 2025 16:10:21 +0000 (0:00:00.057) 0:00:03.035 ***
2025-09-17 16:13:27.202633 | orchestrator | ok: [localhost]
2025-09-17 16:13:27.202644 | orchestrator |
2025-09-17 16:13:27.202655 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 16:13:27.202666 | orchestrator |
2025-09-17 16:13:27.202677 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 16:13:27.202687 | orchestrator | Wednesday 17 September 2025 16:10:21 +0000 (0:00:00.145) 0:00:03.180 ***
2025-09-17 16:13:27.202698 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:13:27.202709 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:13:27.202720 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:13:27.202730 | orchestrator |
2025-09-17 16:13:27.202741 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 16:13:27.202752 | orchestrator | Wednesday 17 September 2025 16:10:22 +0000 (0:00:00.306) 0:00:03.486 ***
2025-09-17 16:13:27.202763 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-09-17 16:13:27.202774 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-09-17 16:13:27.202785 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-09-17 16:13:27.202795 | orchestrator |
2025-09-17 16:13:27.202806 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-09-17 16:13:27.202817 | orchestrator |
2025-09-17 16:13:27.202828 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-09-17 16:13:27.202839 | orchestrator | Wednesday 17 September 2025 16:10:23 +0000 (0:00:00.738) 0:00:04.225 ***
2025-09-17 16:13:27.202850 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-17 16:13:27.202860 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-17 16:13:27.202871 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-17 16:13:27.202881 | orchestrator |
2025-09-17 16:13:27.202892 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-17 16:13:27.202903 | orchestrator | Wednesday 17 September 2025 16:10:23 +0000 (0:00:00.379) 0:00:04.605 ***
2025-09-17 16:13:27.202914 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:13:27.202926 | orchestrator |
2025-09-17 16:13:27.202937 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-09-17 16:13:27.202949 | orchestrator | Wednesday 17 September 2025 16:10:24 +0000 (0:00:00.704) 0:00:05.309 ***
2025-09-17 16:13:27.202990 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 16:13:27.203019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 16:13:27.203038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 16:13:27.203058 | orchestrator | 2025-09-17 16:13:27.203193 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-17 16:13:27.203246 | orchestrator | Wednesday 17 September 2025 16:10:27 +0000 (0:00:03.555) 0:00:08.865 *** 2025-09-17 16:13:27.203258 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.203271 | orchestrator | 
changed: [testbed-node-0] 2025-09-17 16:13:27.203282 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.203295 | orchestrator | 2025-09-17 16:13:27.203307 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-17 16:13:27.203317 | orchestrator | Wednesday 17 September 2025 16:10:28 +0000 (0:00:00.687) 0:00:09.552 *** 2025-09-17 16:13:27.203328 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.203339 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.203432 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:13:27.203446 | orchestrator | 2025-09-17 16:13:27.203456 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-17 16:13:27.203467 | orchestrator | Wednesday 17 September 2025 16:10:29 +0000 (0:00:01.469) 0:00:11.022 *** 2025-09-17 16:13:27.203480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 16:13:27.203510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 16:13:27.203537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-17 16:13:27.203549 | orchestrator |
2025-09-17 16:13:27.203560 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-09-17 16:13:27.203571 | orchestrator | Wednesday 17 September 2025 16:10:33 +0000 (0:00:04.134) 0:00:15.156 ***
2025-09-17 16:13:27.203582 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:13:27.203592 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:13:27.203603 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:13:27.203613 | orchestrator |
2025-09-17 16:13:27.203624 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-09-17 16:13:27.203635 | orchestrator | Wednesday 17 September 2025 16:10:34 +0000 (0:00:01.034) 0:00:16.190 ***
2025-09-17 16:13:27.203646 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:13:27.203656 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:13:27.203667 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:13:27.203677 | orchestrator |
2025-09-17 16:13:27.203693 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-17 16:13:27.203704 | orchestrator | Wednesday 17 September 2025 16:10:38 +0000 (0:00:03.989) 0:00:20.179 ***
2025-09-17 16:13:27.203715 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:13:27.203725 | orchestrator |
2025-09-17 16:13:27.203736 |
orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-17 16:13:27.203753 | orchestrator | Wednesday 17 September 2025 16:10:39 +0000 (0:00:01.022) 0:00:21.202 *** 2025-09-17 16:13:27.203773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:13:27.203786 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:13:27.203798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:13:27.203813 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.203832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 
16:13:27.203850 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.203861 | orchestrator | 2025-09-17 16:13:27.203872 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-17 16:13:27.203883 | orchestrator | Wednesday 17 September 2025 16:10:42 +0000 (0:00:03.010) 0:00:24.212 *** 2025-09-17 16:13:27.203894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:13:27.203906 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:13:27.203930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:13:27.203948 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.203960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:13:27.203972 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.203983 | orchestrator | 2025-09-17 16:13:27.203993 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-17 16:13:27.204004 | orchestrator | Wednesday 17 September 2025 16:10:46 +0000 (0:00:03.354) 0:00:27.567 *** 2025-09-17 16:13:27.204026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:13:27.204044 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.204057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:13:27.204071 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:13:27.204088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-17 16:13:27.204107 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.204119 | orchestrator | 2025-09-17 16:13:27.204131 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-17 16:13:27.204143 | orchestrator | Wednesday 17 September 2025 16:10:48 +0000 (0:00:02.512) 0:00:30.079 *** 2025-09-17 16:13:27.204165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 16:13:27.204184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 16:13:27.204233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-17 16:13:27.204248 | orchestrator | 2025-09-17 16:13:27.204260 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-17 16:13:27.204272 | orchestrator | Wednesday 17 September 2025 16:10:51 +0000 (0:00:03.113) 0:00:33.193 *** 2025-09-17 16:13:27.204284 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:13:27.204295 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:13:27.204307 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:13:27.204319 | orchestrator | 2025-09-17 16:13:27.204330 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-17 16:13:27.204343 | orchestrator | Wednesday 17 September 2025 16:10:53 +0000 (0:00:01.073) 0:00:34.266 *** 2025-09-17 16:13:27.204361 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:13:27.204373 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:13:27.204385 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:13:27.204397 | orchestrator | 2025-09-17 16:13:27.204410 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-17 16:13:27.204422 | orchestrator | Wednesday 17 September 2025 16:10:53 +0000 (0:00:00.284) 0:00:34.551 *** 2025-09-17 16:13:27.204433 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:13:27.204444 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:13:27.204454 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:13:27.204465 | orchestrator | 2025-09-17 16:13:27.204476 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-17 16:13:27.204486 | orchestrator | Wednesday 17 September 2025 16:10:53 +0000 (0:00:00.286) 0:00:34.837 *** 2025-09-17 16:13:27.204498 | orchestrator | fatal: 
[testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-17 16:13:27.204509 | orchestrator | ...ignoring 2025-09-17 16:13:27.204520 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-17 16:13:27.204531 | orchestrator | ...ignoring 2025-09-17 16:13:27.204546 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-17 16:13:27.204557 | orchestrator | ...ignoring 2025-09-17 16:13:27.204568 | orchestrator | 2025-09-17 16:13:27.204579 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-17 16:13:27.204590 | orchestrator | Wednesday 17 September 2025 16:11:04 +0000 (0:00:10.765) 0:00:45.602 *** 2025-09-17 16:13:27.204600 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:13:27.204611 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:13:27.204622 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:13:27.204632 | orchestrator | 2025-09-17 16:13:27.204643 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-17 16:13:27.204654 | orchestrator | Wednesday 17 September 2025 16:11:04 +0000 (0:00:00.502) 0:00:46.105 *** 2025-09-17 16:13:27.204665 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:13:27.204676 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.204686 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.204697 | orchestrator | 2025-09-17 16:13:27.204708 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-17 16:13:27.204718 | orchestrator | Wednesday 17 September 2025 16:11:05 +0000 (0:00:00.440) 0:00:46.546 *** 2025-09-17 16:13:27.204729 | 
orchestrator | skipping: [testbed-node-0] 2025-09-17 16:13:27.204740 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.204751 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.204761 | orchestrator | 2025-09-17 16:13:27.204772 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-17 16:13:27.204783 | orchestrator | Wednesday 17 September 2025 16:11:05 +0000 (0:00:00.380) 0:00:46.927 *** 2025-09-17 16:13:27.204793 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:13:27.204804 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.204815 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.204826 | orchestrator | 2025-09-17 16:13:27.204837 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-17 16:13:27.204853 | orchestrator | Wednesday 17 September 2025 16:11:06 +0000 (0:00:00.338) 0:00:47.265 *** 2025-09-17 16:13:27.204864 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:13:27.204875 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:13:27.204885 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:13:27.204896 | orchestrator | 2025-09-17 16:13:27.204907 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-17 16:13:27.204918 | orchestrator | Wednesday 17 September 2025 16:11:06 +0000 (0:00:00.614) 0:00:47.880 *** 2025-09-17 16:13:27.204941 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:13:27.204952 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.204962 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.204973 | orchestrator | 2025-09-17 16:13:27.204984 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-17 16:13:27.204995 | orchestrator | Wednesday 17 September 2025 16:11:07 +0000 (0:00:00.362) 0:00:48.242 *** 2025-09-17 16:13:27.205005 | orchestrator | 
skipping: [testbed-node-1] 2025-09-17 16:13:27.205016 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.205027 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-17 16:13:27.205037 | orchestrator | 2025-09-17 16:13:27.205048 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-17 16:13:27.205059 | orchestrator | Wednesday 17 September 2025 16:11:07 +0000 (0:00:00.344) 0:00:48.587 *** 2025-09-17 16:13:27.205069 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:13:27.205080 | orchestrator | 2025-09-17 16:13:27.205091 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-17 16:13:27.205102 | orchestrator | Wednesday 17 September 2025 16:11:16 +0000 (0:00:09.591) 0:00:58.179 *** 2025-09-17 16:13:27.205113 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:13:27.205123 | orchestrator | 2025-09-17 16:13:27.205134 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-17 16:13:27.205145 | orchestrator | Wednesday 17 September 2025 16:11:17 +0000 (0:00:00.116) 0:00:58.295 *** 2025-09-17 16:13:27.205156 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:13:27.205167 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.205177 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.205188 | orchestrator | 2025-09-17 16:13:27.205199 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-17 16:13:27.205264 | orchestrator | Wednesday 17 September 2025 16:11:17 +0000 (0:00:00.819) 0:00:59.115 *** 2025-09-17 16:13:27.205275 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:13:27.205286 | orchestrator | 2025-09-17 16:13:27.205297 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-17 16:13:27.205308 | orchestrator | 
Wednesday 17 September 2025 16:11:24 +0000 (0:00:06.907) 0:01:06.022 *** 2025-09-17 16:13:27.205318 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:13:27.205327 | orchestrator | 2025-09-17 16:13:27.205336 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-17 16:13:27.205346 | orchestrator | Wednesday 17 September 2025 16:11:27 +0000 (0:00:02.562) 0:01:08.585 *** 2025-09-17 16:13:27.205355 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:13:27.205365 | orchestrator | 2025-09-17 16:13:27.205375 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-17 16:13:27.205385 | orchestrator | Wednesday 17 September 2025 16:11:29 +0000 (0:00:02.273) 0:01:10.858 *** 2025-09-17 16:13:27.205394 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:13:27.205404 | orchestrator | 2025-09-17 16:13:27.205413 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-17 16:13:27.205423 | orchestrator | Wednesday 17 September 2025 16:11:29 +0000 (0:00:00.090) 0:01:10.948 *** 2025-09-17 16:13:27.205433 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:13:27.205442 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.205452 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.205461 | orchestrator | 2025-09-17 16:13:27.205471 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-17 16:13:27.205480 | orchestrator | Wednesday 17 September 2025 16:11:30 +0000 (0:00:00.389) 0:01:11.338 *** 2025-09-17 16:13:27.205490 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:13:27.205504 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-17 16:13:27.205513 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:13:27.205523 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:13:27.205532 | 
orchestrator | 2025-09-17 16:13:27.205548 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-17 16:13:27.205558 | orchestrator | skipping: no hosts matched 2025-09-17 16:13:27.205567 | orchestrator | 2025-09-17 16:13:27.205577 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-17 16:13:27.205586 | orchestrator | 2025-09-17 16:13:27.205596 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-17 16:13:27.205605 | orchestrator | Wednesday 17 September 2025 16:11:30 +0000 (0:00:00.300) 0:01:11.638 *** 2025-09-17 16:13:27.205615 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:13:27.205624 | orchestrator | 2025-09-17 16:13:27.205634 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-17 16:13:27.205644 | orchestrator | Wednesday 17 September 2025 16:11:49 +0000 (0:00:18.654) 0:01:30.293 *** 2025-09-17 16:13:27.205653 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:13:27.205663 | orchestrator | 2025-09-17 16:13:27.205672 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-17 16:13:27.205682 | orchestrator | Wednesday 17 September 2025 16:12:09 +0000 (0:00:20.596) 0:01:50.890 *** 2025-09-17 16:13:27.205691 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:13:27.205701 | orchestrator | 2025-09-17 16:13:27.205710 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-17 16:13:27.205720 | orchestrator | 2025-09-17 16:13:27.205729 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-17 16:13:27.205739 | orchestrator | Wednesday 17 September 2025 16:12:12 +0000 (0:00:02.453) 0:01:53.343 *** 2025-09-17 16:13:27.205748 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:13:27.205758 | 
orchestrator | 2025-09-17 16:13:27.205767 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-17 16:13:27.205783 | orchestrator | Wednesday 17 September 2025 16:12:32 +0000 (0:00:19.960) 0:02:13.303 *** 2025-09-17 16:13:27.205793 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:13:27.205802 | orchestrator | 2025-09-17 16:13:27.205812 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-17 16:13:27.205821 | orchestrator | Wednesday 17 September 2025 16:12:52 +0000 (0:00:20.548) 0:02:33.852 *** 2025-09-17 16:13:27.205831 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:13:27.205840 | orchestrator | 2025-09-17 16:13:27.205850 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-17 16:13:27.205859 | orchestrator | 2025-09-17 16:13:27.205869 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-17 16:13:27.205878 | orchestrator | Wednesday 17 September 2025 16:12:55 +0000 (0:00:02.755) 0:02:36.607 *** 2025-09-17 16:13:27.205888 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:13:27.205897 | orchestrator | 2025-09-17 16:13:27.205907 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-17 16:13:27.205916 | orchestrator | Wednesday 17 September 2025 16:13:10 +0000 (0:00:15.098) 0:02:51.706 *** 2025-09-17 16:13:27.205926 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:13:27.205935 | orchestrator | 2025-09-17 16:13:27.205945 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-17 16:13:27.205954 | orchestrator | Wednesday 17 September 2025 16:13:11 +0000 (0:00:00.545) 0:02:52.252 *** 2025-09-17 16:13:27.205964 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:13:27.205973 | orchestrator | 2025-09-17 16:13:27.205983 | orchestrator | PLAY [Apply 
mariadb post-configuration] **************************************** 2025-09-17 16:13:27.205992 | orchestrator | 2025-09-17 16:13:27.206002 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-17 16:13:27.206040 | orchestrator | Wednesday 17 September 2025 16:13:13 +0000 (0:00:02.303) 0:02:54.555 *** 2025-09-17 16:13:27.206053 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:13:27.206063 | orchestrator | 2025-09-17 16:13:27.206073 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-17 16:13:27.206082 | orchestrator | Wednesday 17 September 2025 16:13:13 +0000 (0:00:00.500) 0:02:55.055 *** 2025-09-17 16:13:27.206097 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.206107 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.206116 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:13:27.206126 | orchestrator | 2025-09-17 16:13:27.206135 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-17 16:13:27.206145 | orchestrator | Wednesday 17 September 2025 16:13:16 +0000 (0:00:02.372) 0:02:57.428 *** 2025-09-17 16:13:27.206155 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.206164 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.206174 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:13:27.206183 | orchestrator | 2025-09-17 16:13:27.206193 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-17 16:13:27.206221 | orchestrator | Wednesday 17 September 2025 16:13:18 +0000 (0:00:02.233) 0:02:59.662 *** 2025-09-17 16:13:27.206231 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.206241 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.206250 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:13:27.206260 | 
orchestrator | 2025-09-17 16:13:27.206269 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-17 16:13:27.206279 | orchestrator | Wednesday 17 September 2025 16:13:20 +0000 (0:00:02.238) 0:03:01.901 *** 2025-09-17 16:13:27.206289 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.206298 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.206307 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:13:27.206317 | orchestrator | 2025-09-17 16:13:27.206327 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-17 16:13:27.206336 | orchestrator | Wednesday 17 September 2025 16:13:22 +0000 (0:00:02.181) 0:03:04.083 *** 2025-09-17 16:13:27.206346 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:13:27.206355 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:13:27.206365 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:13:27.206374 | orchestrator | 2025-09-17 16:13:27.206384 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-17 16:13:27.206398 | orchestrator | Wednesday 17 September 2025 16:13:25 +0000 (0:00:02.903) 0:03:06.987 *** 2025-09-17 16:13:27.206407 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:13:27.206417 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:13:27.206426 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:13:27.206436 | orchestrator | 2025-09-17 16:13:27.206445 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:13:27.206455 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-17 16:13:27.206465 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-17 16:13:27.206476 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  
rescued=0 ignored=1  2025-09-17 16:13:27.206486 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-17 16:13:27.206495 | orchestrator | 2025-09-17 16:13:27.206505 | orchestrator | 2025-09-17 16:13:27.206514 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:13:27.206524 | orchestrator | Wednesday 17 September 2025 16:13:25 +0000 (0:00:00.218) 0:03:07.205 *** 2025-09-17 16:13:27.206534 | orchestrator | =============================================================================== 2025-09-17 16:13:27.206543 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.15s 2025-09-17 16:13:27.206553 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.61s 2025-09-17 16:13:27.206569 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 15.10s 2025-09-17 16:13:27.206584 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.77s 2025-09-17 16:13:27.206593 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.59s 2025-09-17 16:13:27.206603 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.91s 2025-09-17 16:13:27.206612 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.21s 2025-09-17 16:13:27.206622 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.13s 2025-09-17 16:13:27.206631 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.99s 2025-09-17 16:13:27.206641 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.56s 2025-09-17 16:13:27.206650 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.35s 2025-09-17 
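The mariadb handlers in this run repeatedly wait for Galera WSREP sync before moving on. As an illustrative sketch only (not the kolla-ansible implementation; `fetch_state` is a stand-in for an actual `SHOW STATUS LIKE 'wsrep_local_state_comment'` query), such a wait amounts to polling until the node reports `Synced`:

```python
import time

def wait_for_wsrep_sync(fetch_state, timeout=360, interval=2):
    """Poll a Galera node until wsrep_local_state_comment reports 'Synced'.

    fetch_state is a callable returning the current value of the
    wsrep_local_state_comment status variable; in a real deployment it
    would run a status query against MariaDB.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if fetch_state() == "Synced":
            return True
        time.sleep(interval)
    return False  # node never reached the Synced state in time

# Simulated node that finishes a state transfer after two polls.
states = iter(["Joined", "Donor/Desynced", "Synced"])
print(wait_for_wsrep_sync(lambda: next(states), timeout=10, interval=0))  # prints True
```

A real check would also fail fast on terminal error states rather than waiting out the full timeout.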
16:13:27.206660 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.11s 2025-09-17 16:13:27.206669 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.01s 2025-09-17 16:13:27.206679 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.90s 2025-09-17 16:13:27.206688 | orchestrator | Check MariaDB service --------------------------------------------------- 2.82s 2025-09-17 16:13:27.206698 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.56s 2025-09-17 16:13:27.206707 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.51s 2025-09-17 16:13:27.206716 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.37s 2025-09-17 16:13:27.206726 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.30s 2025-09-17 16:13:27.206735 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.27s 2025-09-17 16:13:27.206745 | orchestrator | 2025-09-17 16:13:27 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED 2025-09-17 16:13:27.206755 | orchestrator | 2025-09-17 16:13:27 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:13:30.253624 | orchestrator | 2025-09-17 16:13:30 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state STARTED 2025-09-17 16:13:30.253800 | orchestrator | 2025-09-17 16:13:30 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED 2025-09-17 16:13:30.255006 | orchestrator | 2025-09-17 16:13:30 | INFO  | Task 4cb1d04b-23de-4dd6-bfd6-78ce4a0e2966 is in state STARTED 2025-09-17 16:13:30.255045 | orchestrator | 2025-09-17 16:13:30 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:14:46.387459 | orchestrator | 2025-09-17 16:14:46 | INFO  | Task cbbbd74e-7b4b-4813-90fd-4aa9205a2fc2 is in state SUCCESS 2025-09-17 16:14:46.389908 | orchestrator | 2025-09-17 16:14:46.389961 | orchestrator | 2025-09-17 16:14:46.389974 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-17 16:14:46.389986 | orchestrator | 2025-09-17 16:14:46.389997 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-17 16:14:46.390008 | orchestrator | Wednesday 17 September 2025 16:12:35 +0000 (0:00:00.601) 0:00:00.601 *** 2025-09-17 16:14:46.390072 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:14:46.390087 | orchestrator | 2025-09-17 16:14:46.390097 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-17 16:14:46.390108 | orchestrator | Wednesday 17 September 2025 16:12:36 +0000 (0:00:00.597) 0:00:01.199 *** 2025-09-17 16:14:46.390119 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:14:46.390131 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:14:46.390142 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.390152 | orchestrator | 2025-09-17 16:14:46.390164 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-17 16:14:46.390175 | orchestrator | Wednesday 17 September 2025 16:12:36 +0000 (0:00:00.656) 0:00:01.856 *** 2025-09-17 16:14:46.390620 | orchestrator | ok:
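The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" entries in this log come from a poll-until-terminal-state loop. A minimal sketch of that pattern, with `get_state` as a hypothetical stand-in for the real task-status API:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1):
    """Poll task states, logging each check, until every task leaves STARTED."""
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
        if still_running:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
        pending = still_running
    # All tasks reached a terminal state (e.g. SUCCESS or FAILURE).

# Simulated backend: each task succeeds after a fixed number of polls.
polls = {"task-a": 2, "task-b": 1}
def get_state(task_id):
    polls[task_id] -= 1
    return "STARTED" if polls[task_id] > 0 else "SUCCESS"

wait_for_tasks(get_state, ["task-a", "task-b"], interval=0)
```

A production loop would also bound the total wait time and surface FAILURE states to the caller instead of just logging them.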
[testbed-node-3] 2025-09-17 16:14:46.390633 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:14:46.390644 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.390654 | orchestrator | 2025-09-17 16:14:46.390665 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-17 16:14:46.390676 | orchestrator | Wednesday 17 September 2025 16:12:37 +0000 (0:00:00.279) 0:00:02.136 *** 2025-09-17 16:14:46.390686 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:14:46.390697 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:14:46.390707 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.390718 | orchestrator | 2025-09-17 16:14:46.390728 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-17 16:14:46.390739 | orchestrator | Wednesday 17 September 2025 16:12:37 +0000 (0:00:00.784) 0:00:02.921 *** 2025-09-17 16:14:46.390750 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:14:46.390760 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:14:46.390771 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.390839 | orchestrator | 2025-09-17 16:14:46.391099 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-17 16:14:46.391116 | orchestrator | Wednesday 17 September 2025 16:12:38 +0000 (0:00:00.294) 0:00:03.215 *** 2025-09-17 16:14:46.391127 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:14:46.391138 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:14:46.391149 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.391159 | orchestrator | 2025-09-17 16:14:46.391170 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-17 16:14:46.391181 | orchestrator | Wednesday 17 September 2025 16:12:38 +0000 (0:00:00.301) 0:00:03.516 *** 2025-09-17 16:14:46.391278 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:14:46.391294 | orchestrator | ok: 
[testbed-node-4] 2025-09-17 16:14:46.391317 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.391328 | orchestrator | 2025-09-17 16:14:46.391339 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-17 16:14:46.391350 | orchestrator | Wednesday 17 September 2025 16:12:38 +0000 (0:00:00.295) 0:00:03.812 *** 2025-09-17 16:14:46.391361 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.391373 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:14:46.391383 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:14:46.391394 | orchestrator | 2025-09-17 16:14:46.391421 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-17 16:14:46.391433 | orchestrator | Wednesday 17 September 2025 16:12:39 +0000 (0:00:00.448) 0:00:04.261 *** 2025-09-17 16:14:46.391465 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:14:46.391477 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:14:46.391487 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.391498 | orchestrator | 2025-09-17 16:14:46.391508 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-17 16:14:46.391519 | orchestrator | Wednesday 17 September 2025 16:12:39 +0000 (0:00:00.287) 0:00:04.548 *** 2025-09-17 16:14:46.391530 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-17 16:14:46.391540 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-17 16:14:46.391551 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-17 16:14:46.391561 | orchestrator | 2025-09-17 16:14:46.391572 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-17 16:14:46.391583 | orchestrator | Wednesday 17 September 2025 16:12:40 +0000 (0:00:00.593) 
0:00:05.142 *** 2025-09-17 16:14:46.391593 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:14:46.391604 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:14:46.391615 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.391625 | orchestrator | 2025-09-17 16:14:46.391636 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-17 16:14:46.391646 | orchestrator | Wednesday 17 September 2025 16:12:40 +0000 (0:00:00.427) 0:00:05.569 *** 2025-09-17 16:14:46.391657 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-17 16:14:46.391668 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-17 16:14:46.391678 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-17 16:14:46.391689 | orchestrator | 2025-09-17 16:14:46.391699 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-17 16:14:46.391710 | orchestrator | Wednesday 17 September 2025 16:12:42 +0000 (0:00:02.156) 0:00:07.726 *** 2025-09-17 16:14:46.391721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-17 16:14:46.391732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-17 16:14:46.391742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-17 16:14:46.391753 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.391763 | orchestrator | 2025-09-17 16:14:46.391774 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-17 16:14:46.391830 | orchestrator | Wednesday 17 September 2025 16:12:43 +0000 (0:00:00.388) 0:00:08.114 *** 2025-09-17 16:14:46.391846 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-17 16:14:46.391860 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-17 16:14:46.391872 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-17 16:14:46.391885 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.391899 | orchestrator |
2025-09-17 16:14:46.391912 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-17 16:14:46.391926 | orchestrator | Wednesday 17 September 2025 16:12:43 +0000 (0:00:00.774) 0:00:08.889 ***
2025-09-17 16:14:46.391941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-17 16:14:46.391965 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-17 16:14:46.391997 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-17 16:14:46.392011 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.392023 | orchestrator |
2025-09-17 16:14:46.392035 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-17 16:14:46.392047 | orchestrator | Wednesday 17 September 2025 16:12:44 +0000 (0:00:00.152) 0:00:09.041 ***
2025-09-17 16:14:46.392062 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '759bfa46d343', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-17 16:12:41.248756', 'end': '2025-09-17 16:12:41.283279', 'delta': '0:00:00.034523', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['759bfa46d343'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-17 16:14:46.392079 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'effa9eae06c9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-17 16:12:41.995685', 'end': '2025-09-17 16:12:42.036189', 'delta': '0:00:00.040504', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['effa9eae06c9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-17 16:14:46.392125 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e957fc642e79', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-17 16:12:42.522669', 'end': '2025-09-17 16:12:42.558578', 'delta': '0:00:00.035909', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e957fc642e79'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-17 16:14:46.392139 | orchestrator |
2025-09-17 16:14:46.392152 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-17 16:14:46.392164 | orchestrator | Wednesday 17 September 2025 16:12:44 +0000 (0:00:00.352) 0:00:09.393 ***
2025-09-17 16:14:46.392176 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:14:46.392189 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:14:46.392208 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:14:46.392220 | orchestrator |
2025-09-17 16:14:46.392289 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-17 16:14:46.392300 | orchestrator | Wednesday 17 September 2025 16:12:44 +0000 (0:00:00.433) 0:00:09.827 ***
2025-09-17 16:14:46.392311 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-09-17 16:14:46.392322 | orchestrator |
2025-09-17 16:14:46.392332 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-17 16:14:46.392343 | orchestrator | Wednesday 17 September 2025 16:12:46 +0000 (0:00:01.807) 0:00:11.634 ***
2025-09-17 16:14:46.392354 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.392364 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:14:46.392375 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:14:46.392386 | orchestrator |
2025-09-17 16:14:46.392396 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-17 16:14:46.392407 | orchestrator | Wednesday 17 September 2025 16:12:46 +0000 (0:00:00.383) 0:00:11.899 ***
2025-09-17 16:14:46.392418 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.392428 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:14:46.392439 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:14:46.392450 | orchestrator |
2025-09-17 16:14:46.392460 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-17 16:14:46.392471 | orchestrator | Wednesday 17 September 2025 16:12:47 +0000 (0:00:00.453) 0:00:12.282 ***
2025-09-17 16:14:46.392482 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.392491 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:14:46.392505 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:14:46.392515 | orchestrator |
2025-09-17 16:14:46.392525 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-17 16:14:46.392535 | orchestrator | Wednesday 17 September 2025 16:12:47 +0000 (0:00:00.453) 0:00:12.736 ***
2025-09-17 16:14:46.392544 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:14:46.392554 | orchestrator |
2025-09-17 16:14:46.392563 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-17 16:14:46.392573 | orchestrator | Wednesday 17 September 2025 16:12:47 +0000 (0:00:00.130) 0:00:12.866 ***
2025-09-17 16:14:46.392582 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.392591 | orchestrator |
2025-09-17 16:14:46.392601 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-17 16:14:46.392611 | orchestrator | Wednesday 17 September 2025 16:12:48 +0000 (0:00:00.219) 0:00:13.085 ***
2025-09-17 16:14:46.392620 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.392630 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:14:46.392639 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:14:46.392648 | orchestrator |
2025-09-17 16:14:46.392658 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-17 16:14:46.392668 | orchestrator | Wednesday 17 September 2025 16:12:48 +0000 (0:00:00.293) 0:00:13.379 ***
2025-09-17 16:14:46.392677 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.392686 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:14:46.392696 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:14:46.392705 | orchestrator |
2025-09-17 16:14:46.392715 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-17 16:14:46.392724 | orchestrator | Wednesday 17 September 2025 16:12:48 +0000 (0:00:00.309) 0:00:13.688 ***
2025-09-17 16:14:46.392734 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.392744 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:14:46.392753 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:14:46.392763 | orchestrator |
2025-09-17 16:14:46.392772 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-17 16:14:46.392781 | orchestrator | Wednesday 17 September 2025 16:12:49 +0000 (0:00:00.454) 0:00:14.142 ***
2025-09-17 16:14:46.392791 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.392806 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:14:46.392816 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:14:46.392825 | orchestrator |
2025-09-17 16:14:46.392835 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-17 16:14:46.392844 | orchestrator | Wednesday 17 September 2025 16:12:49 +0000 (0:00:00.298) 0:00:14.440 ***
2025-09-17 16:14:46.392854 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.392863 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:14:46.392873 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:14:46.392882 | orchestrator |
2025-09-17 16:14:46.392892 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-17 16:14:46.392901 | orchestrator | Wednesday 17 September 2025 16:12:49 +0000 (0:00:00.318) 0:00:14.759 ***
2025-09-17 16:14:46.392911 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.392920 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:14:46.392930 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:14:46.392939 | orchestrator |
2025-09-17 16:14:46.392949 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-17 16:14:46.392988 | orchestrator | Wednesday 17 September 2025 16:12:50 +0000 (0:00:00.319) 0:00:15.078 ***
2025-09-17 16:14:46.393000 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.393009 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:14:46.393019 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:14:46.393028 | orchestrator |
2025-09-17 16:14:46.393037 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-17 16:14:46.393047 | orchestrator | Wednesday 17 September 2025 16:12:50 +0000 (0:00:00.482) 0:00:15.561 ***
2025-09-17 16:14:46.393058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c66c71d--5352--5b3e--b37c--d5d685617e79-osd--block--3c66c71d--5352--5b3e--b37c--d5d685617e79', 'dm-uuid-LVM-6RyMlMdjeOp7j1vRqfxRAJS3ApVXn13X2Vfadb7vhG6ge7Y1r6yBKNgM18gGcW0H'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c-osd--block--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c', 'dm-uuid-LVM-vIyaoAaQU4BLTggnPtIfxsEZD3fWK7cz6KBYfcsu0o52AotTNOuzw91MCFv9KHzh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part1', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part14', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part15', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part16', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3c66c71d--5352--5b3e--b37c--d5d685617e79-osd--block--3c66c71d--5352--5b3e--b37c--d5d685617e79'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wZtIiw-R8yr-uRlx-X2bF-nyyO-Cudf-jRf67i', 'scsi-0QEMU_QEMU_HARDDISK_abcb278f-9464-4e60-af45-8a9c7109c560', 'scsi-SQEMU_QEMU_HARDDISK_abcb278f-9464-4e60-af45-8a9c7109c560'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c-osd--block--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RWWDaA-yPOu-TiwM-4vDa-dfQY-ugWA-9ZlceI', 'scsi-0QEMU_QEMU_HARDDISK_5c4f947a-1fa9-4d40-922c-b00760e10f53', 'scsi-SQEMU_QEMU_HARDDISK_5c4f947a-1fa9-4d40-922c-b00760e10f53'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17f552da--d70b--5fe0--b76a--79be1323ddb4-osd--block--17f552da--d70b--5fe0--b76a--79be1323ddb4', 'dm-uuid-LVM-PdYNM3UuBlXGJqwN3in7M0c9PsPKTYErhy6wKZPnL1bwjK9oynUdSPDssfTgaOFP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81270acf-a9ce-49fe-b935-471dffd13372', 'scsi-SQEMU_QEMU_HARDDISK_81270acf-a9ce-49fe-b935-471dffd13372'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d72d4826--7802--5629--b85e--59298af53c3a-osd--block--d72d4826--7802--5629--b85e--59298af53c3a', 'dm-uuid-LVM-n16DXE8IHM2auFI8fe4U37eK6xVMKQZubvLjSeU2AHjqqqODc6Exx2jcALvDasBJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393501 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:14:46.393540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133-osd--block--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133', 'dm-uuid-LVM-X4M3ygsjIklutR4Bq0CdRZnkK8fpGU3dCbXr4lylFfSoFJ6SpSoOwzsfV30i5M00'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ce5409dd--a4db--5391--81df--07600c6136f3-osd--block--ce5409dd--a4db--5391--81df--07600c6136f3', 'dm-uuid-LVM-sj6dpbc449zUgbdRNYvEkSmp7ingtmE2YrK5U3Jm1Y4fwH5Jc0803iMSz1cWO7kv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--17f552da--d70b--5fe0--b76a--79be1323ddb4-osd--block--17f552da--d70b--5fe0--b76a--79be1323ddb4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lZa7Rh-i3Rn-xMzW-Vlv1-fNcw-aM2A-R4MUAR', 'scsi-0QEMU_QEMU_HARDDISK_389a752f-d381-48a7-a4b5-e7f86559b7a2', 'scsi-SQEMU_QEMU_HARDDISK_389a752f-d381-48a7-a4b5-e7f86559b7a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d72d4826--7802--5629--b85e--59298af53c3a-osd--block--d72d4826--7802--5629--b85e--59298af53c3a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Cs4drC-MDE3-7Bth-4yzp-cd2h-cv6K-SUr5e3', 'scsi-0QEMU_QEMU_HARDDISK_11507ddf-c78f-4c5a-8643-6bad2a8b39ae', 'scsi-SQEMU_QEMU_HARDDISK_11507ddf-c78f-4c5a-8643-6bad2a8b39ae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff1b16a2-3e0a-432a-b441-b3fe8b453f6d', 'scsi-SQEMU_QEMU_HARDDISK_ff1b16a2-3e0a-432a-b441-b3fe8b453f6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393743 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:14:46.393753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-17 16:14:46.393824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part1', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part14', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part15', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part16', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133-osd--block--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cohLej-sudo-7eKj-PPrS-63UL-P3Oi-F37loG', 'scsi-0QEMU_QEMU_HARDDISK_2998f2ff-923f-4644-b235-1d192431ff16', 'scsi-SQEMU_QEMU_HARDDISK_2998f2ff-923f-4644-b235-1d192431ff16'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ce5409dd--a4db--5391--81df--07600c6136f3-osd--block--ce5409dd--a4db--5391--81df--07600c6136f3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-254YoQ-Eg1l-2K9c-2pur-dTIZ-nJKU-cdfDuc', 'scsi-0QEMU_QEMU_HARDDISK_2c018cbd-00e9-4926-8b68-5b46915e5cd3', 'scsi-SQEMU_QEMU_HARDDISK_2c018cbd-00e9-4926-8b68-5b46915e5cd3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-17 16:14:46.393866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c46eecc-90d4-4da2-9e84-51f99bffdbae', 'scsi-SQEMU_QEMU_HARDDISK_4c46eecc-90d4-4da2-9e84-51f99bffdbae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 16:14:46.393882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-17 16:14:46.393892 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:14:46.393902 | orchestrator | 2025-09-17 16:14:46.393912 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-17 16:14:46.393922 | orchestrator | Wednesday 17 September 2025 16:12:51 +0000 (0:00:00.497) 0:00:16.059 *** 2025-09-17 16:14:46.393932 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c66c71d--5352--5b3e--b37c--d5d685617e79-osd--block--3c66c71d--5352--5b3e--b37c--d5d685617e79', 'dm-uuid-LVM-6RyMlMdjeOp7j1vRqfxRAJS3ApVXn13X2Vfadb7vhG6ge7Y1r6yBKNgM18gGcW0H'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.393947 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c-osd--block--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c', 'dm-uuid-LVM-vIyaoAaQU4BLTggnPtIfxsEZD3fWK7cz6KBYfcsu0o52AotTNOuzw91MCFv9KHzh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.393963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.393973 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.393984 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394002 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394013 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394058 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--17f552da--d70b--5fe0--b76a--79be1323ddb4-osd--block--17f552da--d70b--5fe0--b76a--79be1323ddb4', 'dm-uuid-LVM-PdYNM3UuBlXGJqwN3in7M0c9PsPKTYErhy6wKZPnL1bwjK9oynUdSPDssfTgaOFP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394080 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394092 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d72d4826--7802--5629--b85e--59298af53c3a-osd--block--d72d4826--7802--5629--b85e--59298af53c3a', 'dm-uuid-LVM-n16DXE8IHM2auFI8fe4U37eK6xVMKQZubvLjSeU2AHjqqqODc6Exx2jcALvDasBJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394103 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394122 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394132 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394142 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394164 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part1', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part14', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part15', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part16', 'scsi-SQEMU_QEMU_HARDDISK_5198831b-ccf7-4bda-9ab5-c4d193685229-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-17 16:14:46.394183 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3c66c71d--5352--5b3e--b37c--d5d685617e79-osd--block--3c66c71d--5352--5b3e--b37c--d5d685617e79'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wZtIiw-R8yr-uRlx-X2bF-nyyO-Cudf-jRf67i', 'scsi-0QEMU_QEMU_HARDDISK_abcb278f-9464-4e60-af45-8a9c7109c560', 'scsi-SQEMU_QEMU_HARDDISK_abcb278f-9464-4e60-af45-8a9c7109c560'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394194 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394214 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c-osd--block--e55f2ffc--2f4d--55e1--8c19--2e9977a4942c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RWWDaA-yPOu-TiwM-4vDa-dfQY-ugWA-9ZlceI', 'scsi-0QEMU_QEMU_HARDDISK_5c4f947a-1fa9-4d40-922c-b00760e10f53', 'scsi-SQEMU_QEMU_HARDDISK_5c4f947a-1fa9-4d40-922c-b00760e10f53'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394277 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394291 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81270acf-a9ce-49fe-b935-471dffd13372', 'scsi-SQEMU_QEMU_HARDDISK_81270acf-a9ce-49fe-b935-471dffd13372'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394308 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394319 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394336 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394351 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394361 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.394371 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394381 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133-osd--block--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133', 'dm-uuid-LVM-X4M3ygsjIklutR4Bq0CdRZnkK8fpGU3dCbXr4lylFfSoFJ6SpSoOwzsfV30i5M00'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394399 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba0d02f0-2b9a-4a66-9287-74da8ed1487d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-17 16:14:46.394423 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ce5409dd--a4db--5391--81df--07600c6136f3-osd--block--ce5409dd--a4db--5391--81df--07600c6136f3', 'dm-uuid-LVM-sj6dpbc449zUgbdRNYvEkSmp7ingtmE2YrK5U3Jm1Y4fwH5Jc0803iMSz1cWO7kv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394433 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--17f552da--d70b--5fe0--b76a--79be1323ddb4-osd--block--17f552da--d70b--5fe0--b76a--79be1323ddb4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lZa7Rh-i3Rn-xMzW-Vlv1-fNcw-aM2A-R4MUAR', 'scsi-0QEMU_QEMU_HARDDISK_389a752f-d381-48a7-a4b5-e7f86559b7a2', 'scsi-SQEMU_QEMU_HARDDISK_389a752f-d381-48a7-a4b5-e7f86559b7a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394460 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d72d4826--7802--5629--b85e--59298af53c3a-osd--block--d72d4826--7802--5629--b85e--59298af53c3a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Cs4drC-MDE3-7Bth-4yzp-cd2h-cv6K-SUr5e3', 'scsi-0QEMU_QEMU_HARDDISK_11507ddf-c78f-4c5a-8643-6bad2a8b39ae', 'scsi-SQEMU_QEMU_HARDDISK_11507ddf-c78f-4c5a-8643-6bad2a8b39ae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394482 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394523 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff1b16a2-3e0a-432a-b441-b3fe8b453f6d', 'scsi-SQEMU_QEMU_HARDDISK_ff1b16a2-3e0a-432a-b441-b3fe8b453f6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394535 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394545 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394555 | orchestrator | skipping: 
[testbed-node-4] 2025-09-17 16:14:46.394572 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394588 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394599 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394613 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394623 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394640 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part1', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part14', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part15', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part16', 'scsi-SQEMU_QEMU_HARDDISK_c923d257-7213-4d28-88ba-25ec4a127767-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-17 16:14:46.394658 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133-osd--block--2618dc29--ef9a--5981--b8ae--0a6fa7f1f133'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cohLej-sudo-7eKj-PPrS-63UL-P3Oi-F37loG', 'scsi-0QEMU_QEMU_HARDDISK_2998f2ff-923f-4644-b235-1d192431ff16', 'scsi-SQEMU_QEMU_HARDDISK_2998f2ff-923f-4644-b235-1d192431ff16'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394672 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ce5409dd--a4db--5391--81df--07600c6136f3-osd--block--ce5409dd--a4db--5391--81df--07600c6136f3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-254YoQ-Eg1l-2K9c-2pur-dTIZ-nJKU-cdfDuc', 'scsi-0QEMU_QEMU_HARDDISK_2c018cbd-00e9-4926-8b68-5b46915e5cd3', 'scsi-SQEMU_QEMU_HARDDISK_2c018cbd-00e9-4926-8b68-5b46915e5cd3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394683 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c46eecc-90d4-4da2-9e84-51f99bffdbae', 'scsi-SQEMU_QEMU_HARDDISK_4c46eecc-90d4-4da2-9e84-51f99bffdbae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394699 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-17-15-19-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-17 16:14:46.394716 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:14:46.394726 | orchestrator | 2025-09-17 16:14:46.394736 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-17 16:14:46.394746 | orchestrator | Wednesday 17 September 2025 16:12:51 +0000 (0:00:00.617) 0:00:16.677 *** 2025-09-17 16:14:46.394755 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:14:46.394765 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:14:46.394774 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.394784 | orchestrator | 2025-09-17 16:14:46.394793 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-17 16:14:46.394803 | orchestrator | Wednesday 17 September 2025 16:12:52 +0000 (0:00:00.705) 0:00:17.382 *** 2025-09-17 16:14:46.394811 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:14:46.394819 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:14:46.394826 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.394834 | orchestrator | 2025-09-17 16:14:46.394841 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-17 16:14:46.394849 | orchestrator | Wednesday 17 September 2025 16:12:52 +0000 (0:00:00.463) 0:00:17.845 *** 2025-09-17 16:14:46.394857 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:14:46.394865 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:14:46.394872 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.394880 | orchestrator | 2025-09-17 16:14:46.394888 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-17 16:14:46.394895 | orchestrator | Wednesday 17 September 2025 16:12:53 +0000 (0:00:00.679) 
0:00:18.525 *** 2025-09-17 16:14:46.394903 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.394911 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:14:46.394918 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:14:46.394926 | orchestrator | 2025-09-17 16:14:46.394934 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-17 16:14:46.394941 | orchestrator | Wednesday 17 September 2025 16:12:53 +0000 (0:00:00.279) 0:00:18.804 *** 2025-09-17 16:14:46.394949 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.394957 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:14:46.394964 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:14:46.394972 | orchestrator | 2025-09-17 16:14:46.394979 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-17 16:14:46.394987 | orchestrator | Wednesday 17 September 2025 16:12:54 +0000 (0:00:00.412) 0:00:19.217 *** 2025-09-17 16:14:46.394995 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.395002 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:14:46.395014 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:14:46.395022 | orchestrator | 2025-09-17 16:14:46.395030 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-17 16:14:46.395037 | orchestrator | Wednesday 17 September 2025 16:12:54 +0000 (0:00:00.477) 0:00:19.694 *** 2025-09-17 16:14:46.395045 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-17 16:14:46.395053 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-17 16:14:46.395061 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-17 16:14:46.395068 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-17 16:14:46.395076 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-17 16:14:46.395084 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-17 16:14:46.395091 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-17 16:14:46.395099 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-17 16:14:46.395123 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-17 16:14:46.395131 | orchestrator | 2025-09-17 16:14:46.395138 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-17 16:14:46.395146 | orchestrator | Wednesday 17 September 2025 16:12:55 +0000 (0:00:00.946) 0:00:20.640 *** 2025-09-17 16:14:46.395154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-17 16:14:46.395161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-17 16:14:46.395169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-17 16:14:46.395177 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.395184 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-17 16:14:46.395192 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-17 16:14:46.395200 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-17 16:14:46.395207 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:14:46.395215 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-17 16:14:46.395222 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-17 16:14:46.395246 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-17 16:14:46.395254 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:14:46.395262 | orchestrator | 2025-09-17 16:14:46.395270 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-17 16:14:46.395277 | orchestrator | Wednesday 17 September 2025 16:12:55 +0000 (0:00:00.340) 0:00:20.981 *** 2025-09-17 
16:14:46.395285 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:14:46.395294 | orchestrator | 2025-09-17 16:14:46.395302 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-17 16:14:46.395311 | orchestrator | Wednesday 17 September 2025 16:12:56 +0000 (0:00:00.651) 0:00:21.633 *** 2025-09-17 16:14:46.395319 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.395327 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:14:46.395334 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:14:46.395342 | orchestrator | 2025-09-17 16:14:46.395355 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-17 16:14:46.395363 | orchestrator | Wednesday 17 September 2025 16:12:56 +0000 (0:00:00.314) 0:00:21.947 *** 2025-09-17 16:14:46.395371 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.395378 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:14:46.395386 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:14:46.395394 | orchestrator | 2025-09-17 16:14:46.395401 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-17 16:14:46.395409 | orchestrator | Wednesday 17 September 2025 16:12:57 +0000 (0:00:00.304) 0:00:22.252 *** 2025-09-17 16:14:46.395417 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.395425 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:14:46.395432 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:14:46.395440 | orchestrator | 2025-09-17 16:14:46.395448 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-17 16:14:46.395455 | orchestrator | Wednesday 17 September 2025 16:12:57 +0000 (0:00:00.313) 0:00:22.566 *** 2025-09-17 
16:14:46.395463 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:14:46.395471 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:14:46.395479 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.395486 | orchestrator | 2025-09-17 16:14:46.395494 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-17 16:14:46.395502 | orchestrator | Wednesday 17 September 2025 16:12:58 +0000 (0:00:00.592) 0:00:23.159 *** 2025-09-17 16:14:46.395509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 16:14:46.395517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-17 16:14:46.395525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 16:14:46.395538 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.395545 | orchestrator | 2025-09-17 16:14:46.395553 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-17 16:14:46.395561 | orchestrator | Wednesday 17 September 2025 16:12:58 +0000 (0:00:00.380) 0:00:23.540 *** 2025-09-17 16:14:46.395569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 16:14:46.395577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-17 16:14:46.395584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 16:14:46.395592 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.395600 | orchestrator | 2025-09-17 16:14:46.395607 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-17 16:14:46.395615 | orchestrator | Wednesday 17 September 2025 16:12:58 +0000 (0:00:00.395) 0:00:23.935 *** 2025-09-17 16:14:46.395623 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-17 16:14:46.395630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-17 16:14:46.395641 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-17 16:14:46.395649 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.395657 | orchestrator | 2025-09-17 16:14:46.395665 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-17 16:14:46.395672 | orchestrator | Wednesday 17 September 2025 16:12:59 +0000 (0:00:00.330) 0:00:24.266 *** 2025-09-17 16:14:46.395680 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:14:46.395688 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:14:46.395695 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:14:46.395703 | orchestrator | 2025-09-17 16:14:46.395711 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-17 16:14:46.395718 | orchestrator | Wednesday 17 September 2025 16:12:59 +0000 (0:00:00.334) 0:00:24.601 *** 2025-09-17 16:14:46.395726 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-17 16:14:46.395734 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-17 16:14:46.395742 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-17 16:14:46.395749 | orchestrator | 2025-09-17 16:14:46.395757 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-17 16:14:46.395765 | orchestrator | Wednesday 17 September 2025 16:13:00 +0000 (0:00:00.475) 0:00:25.076 *** 2025-09-17 16:14:46.395772 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-17 16:14:46.395780 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-17 16:14:46.395788 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-17 16:14:46.395796 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-17 16:14:46.395803 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-09-17 16:14:46.395811 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-17 16:14:46.395819 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-17 16:14:46.395826 | orchestrator | 2025-09-17 16:14:46.395834 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-17 16:14:46.395842 | orchestrator | Wednesday 17 September 2025 16:13:00 +0000 (0:00:00.820) 0:00:25.896 *** 2025-09-17 16:14:46.395850 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-17 16:14:46.395857 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-17 16:14:46.395865 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-17 16:14:46.395873 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-17 16:14:46.395881 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-17 16:14:46.395888 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-17 16:14:46.395901 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-17 16:14:46.395908 | orchestrator | 2025-09-17 16:14:46.395920 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-17 16:14:46.395928 | orchestrator | Wednesday 17 September 2025 16:13:02 +0000 (0:00:01.548) 0:00:27.445 *** 2025-09-17 16:14:46.395936 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:14:46.395944 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:14:46.395952 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-17 16:14:46.395959 | orchestrator | 2025-09-17 16:14:46.395967 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-17 16:14:46.395975 | orchestrator | Wednesday 17 September 2025 16:13:02 +0000 (0:00:00.319) 0:00:27.765 *** 2025-09-17 16:14:46.395983 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-17 16:14:46.395992 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-17 16:14:46.396000 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-17 16:14:46.396008 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-17 16:14:46.396020 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-17 16:14:46.396028 | orchestrator | 2025-09-17 16:14:46.396036 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-09-17 16:14:46.396044 | orchestrator | Wednesday 17 September 2025 16:13:50 +0000 (0:00:47.510) 0:01:15.276 *** 2025-09-17 16:14:46.396051 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396059 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396067 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396074 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396082 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396089 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396097 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-17 16:14:46.396105 | orchestrator | 2025-09-17 16:14:46.396112 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-17 16:14:46.396120 | orchestrator | Wednesday 17 September 2025 16:14:15 +0000 (0:00:25.097) 0:01:40.373 *** 2025-09-17 16:14:46.396128 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396135 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396149 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396157 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396164 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396172 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396180 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-17 16:14:46.396188 | orchestrator | 2025-09-17 16:14:46.396195 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-17 16:14:46.396203 | orchestrator | Wednesday 17 September 2025 16:14:27 +0000 (0:00:12.608) 0:01:52.981 *** 2025-09-17 16:14:46.396211 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396218 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-17 16:14:46.396241 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 16:14:46.396249 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396257 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-17 16:14:46.396265 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 16:14:46.396277 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396285 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-17 16:14:46.396293 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 16:14:46.396301 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396308 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-17 16:14:46.396316 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 16:14:46.396324 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396331 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-09-17 16:14:46.396339 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 16:14:46.396347 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-17 16:14:46.396354 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-17 16:14:46.396362 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-17 16:14:46.396370 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-17 16:14:46.396377 | orchestrator | 2025-09-17 16:14:46.396385 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:14:46.396393 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-17 16:14:46.396402 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-17 16:14:46.396410 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-17 16:14:46.396417 | orchestrator | 2025-09-17 16:14:46.396425 | orchestrator | 2025-09-17 16:14:46.396433 | orchestrator | 2025-09-17 16:14:46.396440 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:14:46.396448 | orchestrator | Wednesday 17 September 2025 16:14:45 +0000 (0:00:17.975) 0:02:10.957 *** 2025-09-17 16:14:46.396456 | orchestrator | =============================================================================== 2025-09-17 16:14:46.396467 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.51s 2025-09-17 16:14:46.396485 | orchestrator | generate keys ---------------------------------------------------------- 25.10s 2025-09-17 16:14:46.396493 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.98s 
2025-09-17 16:14:46.396500 | orchestrator | get keys from monitors ------------------------------------------------- 12.61s
2025-09-17 16:14:46.396508 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.16s
2025-09-17 16:14:46.396516 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.81s
2025-09-17 16:14:46.396524 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.55s
2025-09-17 16:14:46.396531 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.95s
2025-09-17 16:14:46.396539 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.82s
2025-09-17 16:14:46.396547 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.78s
2025-09-17 16:14:46.396554 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s
2025-09-17 16:14:46.396562 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.71s
2025-09-17 16:14:46.396570 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s
2025-09-17 16:14:46.396577 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s
2025-09-17 16:14:46.396585 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.65s
2025-09-17 16:14:46.396592 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.62s
2025-09-17 16:14:46.396600 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.60s
2025-09-17 16:14:46.396608 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.59s
2025-09-17 16:14:46.396616 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.59s
2025-09-17 16:14:46.396623 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.50s
2025-09-17 16:14:46.396631 | orchestrator | 2025-09-17 16:14:46 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:14:46.396639 | orchestrator | 2025-09-17 16:14:46 | INFO  | Task 4cb1d04b-23de-4dd6-bfd6-78ce4a0e2966 is in state STARTED
2025-09-17 16:14:46.396647 | orchestrator | 2025-09-17 16:14:46 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:14:49.445842 | orchestrator | 2025-09-17 16:14:49 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:14:49.448383 | orchestrator | 2025-09-17 16:14:49 | INFO  | Task 4cb1d04b-23de-4dd6-bfd6-78ce4a0e2966 is in state STARTED
2025-09-17 16:14:49.450898 | orchestrator | 2025-09-17 16:14:49 | INFO  | Task 443840c9-166a-4291-90c7-d47ef2fa27a2 is in state STARTED
2025-09-17 16:14:49.451183 | orchestrator | 2025-09-17 16:14:49 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:14:52.501129 | orchestrator | 2025-09-17 16:14:52 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:14:52.502081 | orchestrator | 2025-09-17 16:14:52 | INFO  | Task 4cb1d04b-23de-4dd6-bfd6-78ce4a0e2966 is in state STARTED
2025-09-17 16:14:52.502735 | orchestrator | 2025-09-17 16:14:52 | INFO  | Task 443840c9-166a-4291-90c7-d47ef2fa27a2 is in state STARTED
2025-09-17 16:14:52.502758 | orchestrator | 2025-09-17 16:14:52 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:14:55.545357 | orchestrator | 2025-09-17 16:14:55 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:14:55.546132 | orchestrator | 2025-09-17 16:14:55 | INFO  | Task 4cb1d04b-23de-4dd6-bfd6-78ce4a0e2966 is in state STARTED
2025-09-17 16:14:55.547362 | orchestrator | 2025-09-17 16:14:55 | INFO  | Task 443840c9-166a-4291-90c7-d47ef2fa27a2 is in state STARTED
2025-09-17 16:14:55.547428 | orchestrator | 2025-09-17 16:14:55 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:14:58.600783 | orchestrator | 2025-09-17 16:14:58 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:14:58.603193 | orchestrator | 2025-09-17 16:14:58 | INFO  | Task 4cb1d04b-23de-4dd6-bfd6-78ce4a0e2966 is in state STARTED
2025-09-17 16:14:58.605010 | orchestrator | 2025-09-17 16:14:58 | INFO  | Task 443840c9-166a-4291-90c7-d47ef2fa27a2 is in state STARTED
2025-09-17 16:14:58.605352 | orchestrator | 2025-09-17 16:14:58 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:01.659178 | orchestrator | 2025-09-17 16:15:01 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:01.661123 | orchestrator | 2025-09-17 16:15:01 | INFO  | Task 4cb1d04b-23de-4dd6-bfd6-78ce4a0e2966 is in state STARTED
2025-09-17 16:15:01.663576 | orchestrator | 2025-09-17 16:15:01 | INFO  | Task 443840c9-166a-4291-90c7-d47ef2fa27a2 is in state STARTED
2025-09-17 16:15:01.663642 | orchestrator | 2025-09-17 16:15:01 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:04.709970 | orchestrator | 2025-09-17 16:15:04 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:04.712224 | orchestrator | 2025-09-17 16:15:04 | INFO  | Task 4cb1d04b-23de-4dd6-bfd6-78ce4a0e2966 is in state STARTED
2025-09-17 16:15:04.713782 | orchestrator | 2025-09-17 16:15:04 | INFO  | Task 443840c9-166a-4291-90c7-d47ef2fa27a2 is in state STARTED
2025-09-17 16:15:04.713825 | orchestrator | 2025-09-17 16:15:04 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:07.752389 | orchestrator | 2025-09-17 16:15:07 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:07.753316 | orchestrator | 2025-09-17 16:15:07 | INFO  | Task 4cb1d04b-23de-4dd6-bfd6-78ce4a0e2966 is in state STARTED
2025-09-17 16:15:07.754903 | orchestrator | 2025-09-17 16:15:07 | INFO  | Task 443840c9-166a-4291-90c7-d47ef2fa27a2 is in state STARTED
2025-09-17 16:15:07.755423 | orchestrator | 2025-09-17 16:15:07 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:10.791893 | orchestrator | 2025-09-17 16:15:10 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:10.794486 | orchestrator | 2025-09-17 16:15:10 | INFO  | Task 4cb1d04b-23de-4dd6-bfd6-78ce4a0e2966 is in state STARTED
2025-09-17 16:15:10.795384 | orchestrator | 2025-09-17 16:15:10 | INFO  | Task 443840c9-166a-4291-90c7-d47ef2fa27a2 is in state STARTED
2025-09-17 16:15:10.795410 | orchestrator | 2025-09-17 16:15:10 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:13.838293 | orchestrator | 2025-09-17 16:15:13 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:13.841605 | orchestrator | 2025-09-17 16:15:13 | INFO  | Task 4cb1d04b-23de-4dd6-bfd6-78ce4a0e2966 is in state SUCCESS
2025-09-17 16:15:13.843568 | orchestrator |
2025-09-17 16:15:13.843609 | orchestrator |
2025-09-17 16:15:13.843681 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 16:15:13.843697 | orchestrator |
2025-09-17 16:15:13.843896 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 16:15:13.843911 | orchestrator | Wednesday 17 September 2025 16:13:30 +0000 (0:00:00.255) 0:00:00.255 ***
2025-09-17 16:15:13.843923 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:15:13.843936 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:15:13.843948 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:15:13.844033 | orchestrator |
2025-09-17 16:15:13.844051 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 16:15:13.844062 | orchestrator | Wednesday 17 September 2025 16:13:30 +0000 (0:00:00.295) 0:00:00.551 ***
2025-09-17 16:15:13.844096 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-09-17 16:15:13.844108 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-09-17 16:15:13.844119 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-09-17 16:15:13.844129 | orchestrator |
2025-09-17 16:15:13.844140 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-09-17 16:15:13.844151 | orchestrator |
2025-09-17 16:15:13.844161 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-17 16:15:13.844172 | orchestrator | Wednesday 17 September 2025 16:13:30 +0000 (0:00:00.400) 0:00:00.952 ***
2025-09-17 16:15:13.844183 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:15:13.844194 | orchestrator |
2025-09-17 16:15:13.844205 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-09-17 16:15:13.844216 | orchestrator | Wednesday 17 September 2025 16:13:31 +0000 (0:00:00.518) 0:00:01.470 ***
2025-09-17 16:15:13.844309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-17 16:15:13.844346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-17 16:15:13.844376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-17 16:15:13.844389 | orchestrator |
2025-09-17 16:15:13.844400 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-09-17 16:15:13.844411 | orchestrator | Wednesday 17 September 2025 16:13:32 +0000 (0:00:01.104) 0:00:02.574 ***
2025-09-17 16:15:13.844422 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:15:13.844433 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:15:13.844444 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:15:13.844455 | orchestrator |
2025-09-17 16:15:13.844472 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-17 16:15:13.844483 | orchestrator | Wednesday 17 September 2025 16:13:32 +0000 (0:00:00.418) 0:00:02.992 ***
2025-09-17 16:15:13.844502 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-17 16:15:13.844514 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-17 16:15:13.844525 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-09-17 16:15:13.844536 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-09-17 16:15:13.844546 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-09-17 16:15:13.844557 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-09-17 16:15:13.844567 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-09-17 16:15:13.844579 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-09-17 16:15:13.844589 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-17 16:15:13.844600 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-17 16:15:13.844610 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-09-17 16:15:13.844621 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-09-17 16:15:13.844632 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-09-17 16:15:13.844642 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-09-17 16:15:13.844653 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-09-17 16:15:13.844664 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-09-17 16:15:13.844677 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-17 16:15:13.844689 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-17 16:15:13.844702 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-09-17 16:15:13.844714 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-09-17 16:15:13.844726 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-09-17 16:15:13.844738 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-09-17 16:15:13.844750 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-09-17 16:15:13.844763 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-09-17 16:15:13.844776 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-09-17 16:15:13.844788 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-09-17 16:15:13.844804 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-09-17 16:15:13.844815 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-09-17 16:15:13.844826 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-09-17 16:15:13.844837 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-09-17 16:15:13.844953 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-09-17 16:15:13.844964 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-09-17 16:15:13.844975 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-09-17 16:15:13.844987 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-09-17 16:15:13.844998 | orchestrator |
2025-09-17 16:15:13.845008 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 16:15:13.845019 | orchestrator | Wednesday 17 September 2025 16:13:33 +0000 (0:00:00.746) 0:00:03.739 ***
2025-09-17 16:15:13.845030 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:15:13.845041 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:15:13.845051 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:15:13.845062 | orchestrator |
2025-09-17 16:15:13.845072 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 16:15:13.845083 | orchestrator | Wednesday 17 September 2025 16:13:33 +0000 (0:00:00.272) 0:00:04.012 ***
2025-09-17 16:15:13.845100 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.845112 | orchestrator |
2025-09-17 16:15:13.845123 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 16:15:13.845134 | orchestrator | Wednesday 17 September 2025 16:13:33 +0000 (0:00:00.125) 0:00:04.138 ***
2025-09-17 16:15:13.845144 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.845155 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:15:13.845166 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:15:13.845176 | orchestrator |
2025-09-17 16:15:13.845187 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 16:15:13.845198 | orchestrator | Wednesday 17 September 2025 16:13:34 +0000 (0:00:00.446) 0:00:04.584 ***
2025-09-17 16:15:13.845209 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:15:13.845219 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:15:13.845230 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:15:13.845261 | orchestrator |
2025-09-17 16:15:13.845272 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 16:15:13.845283 | orchestrator | Wednesday 17 September 2025 16:13:34 +0000 (0:00:00.301) 0:00:04.886 ***
2025-09-17 16:15:13.845294 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.845304 | orchestrator |
2025-09-17 16:15:13.845315 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 16:15:13.845326 | orchestrator | Wednesday 17 September 2025 16:13:34 +0000 (0:00:00.276) 0:00:05.018 ***
2025-09-17 16:15:13.845337 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.845348 | orchestrator | skipping: [testbed-node-1]
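The osism wait loop seen earlier in this log (repeated "Task <uuid> is in state STARTED" lines followed by "Wait 1 second(s) until the next check", until a task reaches SUCCESS) is a plain poll-until-terminal-state pattern. A minimal sketch, assuming a caller-supplied `fetch_state` callable and treating SUCCESS and FAILURE as the terminal states (both names are assumptions for illustration, not the actual osism API):

```python
import time

# Assumed terminal states; the log above only shows STARTED and SUCCESS.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, fetch_state, interval=1.0, sleep=time.sleep):
    """Poll each task until all reach a terminal state; return final states."""
    states = {}
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            states[task_id] = fetch_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        pending = {t for t in pending if states[t] not in TERMINAL_STATES}
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return states
```

Injecting `sleep` makes the loop testable without real delays; a fake `fetch_state` can replay a sequence such as STARTED, STARTED, SUCCESS per task.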
2025-09-17 16:15:13.845358 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:15:13.845369 | orchestrator |
2025-09-17 16:15:13.845380 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 16:15:13.845390 | orchestrator | Wednesday 17 September 2025 16:13:35 +0000 (0:00:00.276) 0:00:05.294 ***
2025-09-17 16:15:13.845401 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:15:13.845412 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:15:13.845423 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:15:13.845433 | orchestrator |
2025-09-17 16:15:13.845444 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 16:15:13.845455 | orchestrator | Wednesday 17 September 2025 16:13:35 +0000 (0:00:00.294) 0:00:05.588 ***
2025-09-17 16:15:13.845465 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.845476 | orchestrator |
2025-09-17 16:15:13.845487 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 16:15:13.845504 | orchestrator | Wednesday 17 September 2025 16:13:35 +0000 (0:00:00.274) 0:00:05.863 ***
2025-09-17 16:15:13.845515 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.845526 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:15:13.845536 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:15:13.845547 | orchestrator |
2025-09-17 16:15:13.845558 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 16:15:13.845570 | orchestrator | Wednesday 17 September 2025 16:13:36 +0000 (0:00:00.329) 0:00:06.193 ***
2025-09-17 16:15:13.845582 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:15:13.845593 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:15:13.845606 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:15:13.845617 | orchestrator |
2025-09-17 16:15:13.845630 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 16:15:13.845642 | orchestrator | Wednesday 17 September 2025 16:13:36 +0000 (0:00:00.288) 0:00:06.481 ***
2025-09-17 16:15:13.845653 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.845665 | orchestrator |
2025-09-17 16:15:13.845677 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 16:15:13.845689 | orchestrator | Wednesday 17 September 2025 16:13:36 +0000 (0:00:00.137) 0:00:06.619 ***
2025-09-17 16:15:13.845708 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.845720 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:15:13.845731 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:15:13.845743 | orchestrator |
2025-09-17 16:15:13.845755 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 16:15:13.845767 | orchestrator | Wednesday 17 September 2025 16:13:36 +0000 (0:00:00.281) 0:00:06.900 ***
2025-09-17 16:15:13.845779 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:15:13.845791 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:15:13.845803 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:15:13.845815 | orchestrator |
2025-09-17 16:15:13.845827 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 16:15:13.845839 | orchestrator | Wednesday 17 September 2025 16:13:37 +0000 (0:00:00.450) 0:00:07.350 ***
2025-09-17 16:15:13.845851 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.845862 | orchestrator |
2025-09-17 16:15:13.845874 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 16:15:13.845886 | orchestrator | Wednesday 17 September 2025 16:13:37 +0000 (0:00:00.151) 0:00:07.502 ***
2025-09-17 16:15:13.845898 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.845910 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:15:13.845922 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:15:13.845932 | orchestrator |
2025-09-17 16:15:13.845943 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 16:15:13.845954 | orchestrator | Wednesday 17 September 2025 16:13:37 +0000 (0:00:00.278) 0:00:07.780 ***
2025-09-17 16:15:13.845964 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:15:13.845975 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:15:13.845986 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:15:13.845996 | orchestrator |
2025-09-17 16:15:13.846007 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 16:15:13.846065 | orchestrator | Wednesday 17 September 2025 16:13:37 +0000 (0:00:00.277) 0:00:08.057 ***
2025-09-17 16:15:13.846077 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.846088 | orchestrator |
2025-09-17 16:15:13.846098 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 16:15:13.846109 | orchestrator | Wednesday 17 September 2025 16:13:38 +0000 (0:00:00.127) 0:00:08.185 ***
2025-09-17 16:15:13.846120 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.846130 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:15:13.846141 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:15:13.846152 | orchestrator |
2025-09-17 16:15:13.846162 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 16:15:13.846181 | orchestrator | Wednesday 17 September 2025 16:13:38 +0000 (0:00:00.428) 0:00:08.613 ***
2025-09-17 16:15:13.846199 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:15:13.846210 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:15:13.846221 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:15:13.846231 | orchestrator |
2025-09-17 16:15:13.846273 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 16:15:13.846285 | orchestrator | Wednesday 17 September 2025 16:13:38 +0000 (0:00:00.306) 0:00:08.919 ***
2025-09-17 16:15:13.846296 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.846306 | orchestrator |
2025-09-17 16:15:13.846317 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 16:15:13.846328 | orchestrator | Wednesday 17 September 2025 16:13:38 +0000 (0:00:00.126) 0:00:09.046 ***
2025-09-17 16:15:13.846339 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.846350 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:15:13.846361 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:15:13.846372 | orchestrator |
2025-09-17 16:15:13.846383 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 16:15:13.846394 | orchestrator | Wednesday 17 September 2025 16:13:39 +0000 (0:00:00.278) 0:00:09.324 ***
2025-09-17 16:15:13.846405 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:15:13.846415 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:15:13.846426 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:15:13.846437 | orchestrator |
2025-09-17 16:15:13.846448 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 16:15:13.846458 | orchestrator | Wednesday 17 September 2025 16:13:39 +0000 (0:00:00.307) 0:00:09.632 ***
2025-09-17 16:15:13.846470 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.846481 | orchestrator |
2025-09-17 16:15:13.846492 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 16:15:13.846503 | orchestrator | Wednesday 17 September 2025 16:13:39 +0000 (0:00:00.137) 0:00:09.769 ***
2025-09-17 16:15:13.846514 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.846525 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:15:13.846536 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:15:13.846546 | orchestrator |
2025-09-17 16:15:13.846557 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 16:15:13.846568 | orchestrator | Wednesday 17 September 2025 16:13:40 +0000 (0:00:00.442) 0:00:10.211 ***
2025-09-17 16:15:13.846579 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:15:13.846590 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:15:13.846601 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:15:13.846611 | orchestrator |
2025-09-17 16:15:13.846622 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 16:15:13.846633 | orchestrator | Wednesday 17 September 2025 16:13:40 +0000 (0:00:00.289) 0:00:10.501 ***
2025-09-17 16:15:13.846644 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.846654 | orchestrator |
2025-09-17 16:15:13.846665 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 16:15:13.846676 | orchestrator | Wednesday 17 September 2025 16:13:40 +0000 (0:00:00.172) 0:00:10.674 ***
2025-09-17 16:15:13.846686 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.846697 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:15:13.846708 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:15:13.846718 | orchestrator |
2025-09-17 16:15:13.846729 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-17 16:15:13.846740 | orchestrator | Wednesday 17 September 2025 16:13:40 +0000 (0:00:00.281) 0:00:10.955 ***
2025-09-17 16:15:13.846751 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:15:13.846762 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:15:13.846773 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:15:13.846784 | orchestrator |
2025-09-17 16:15:13.846800 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-17 16:15:13.846811 | orchestrator | Wednesday 17 September 2025 16:13:41 +0000 (0:00:00.462) 0:00:11.418 ***
2025-09-17 16:15:13.846828 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.846839 | orchestrator |
2025-09-17 16:15:13.846850 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-17 16:15:13.846860 | orchestrator | Wednesday 17 September 2025 16:13:41 +0000 (0:00:00.140) 0:00:11.558 ***
2025-09-17 16:15:13.846871 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.846882 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:15:13.846893 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:15:13.846904 | orchestrator |
2025-09-17 16:15:13.846915 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-09-17 16:15:13.846926 | orchestrator | Wednesday 17 September 2025 16:13:41 +0000 (0:00:00.277) 0:00:11.836 ***
2025-09-17 16:15:13.846937 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:15:13.846948 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:15:13.846958 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:15:13.846969 | orchestrator |
2025-09-17 16:15:13.846980 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-09-17 16:15:13.846991 | orchestrator | Wednesday 17 September 2025 16:13:43 +0000 (0:00:01.623) 0:00:13.460 ***
2025-09-17 16:15:13.847001 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-17 16:15:13.847012 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-17 16:15:13.847023 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-17 16:15:13.847034 | orchestrator |
2025-09-17 16:15:13.847044 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-09-17 16:15:13.847055 | orchestrator | Wednesday 17 September 2025 16:13:45 +0000 (0:00:01.979) 0:00:15.439 ***
2025-09-17 16:15:13.847066 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-17 16:15:13.847077 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-17 16:15:13.847088 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-17 16:15:13.847099 | orchestrator |
2025-09-17 16:15:13.847115 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-09-17 16:15:13.847127 | orchestrator | Wednesday 17 September 2025 16:13:47 +0000 (0:00:02.185) 0:00:17.625 ***
2025-09-17 16:15:13.847138 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-17 16:15:13.847148 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-17 16:15:13.847159 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-17 16:15:13.847170 | orchestrator |
2025-09-17 16:15:13.847181 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-09-17 16:15:13.847192 | orchestrator | Wednesday 17 September 2025 16:13:48 +0000 (0:00:01.527) 0:00:19.152 ***
2025-09-17 16:15:13.847202 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:15:13.847213 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:15:13.847224 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:15:13.847234 | orchestrator |
2025-09-17 16:15:13.847263 | orchestrator | TASK [horizon : Copying over custom
themes] ************************************ 2025-09-17 16:15:13.847274 | orchestrator | Wednesday 17 September 2025 16:13:49 +0000 (0:00:00.295) 0:00:19.448 *** 2025-09-17 16:15:13.847285 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:15:13.847296 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:15:13.847306 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:15:13.847317 | orchestrator | 2025-09-17 16:15:13.847328 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-17 16:15:13.847339 | orchestrator | Wednesday 17 September 2025 16:13:49 +0000 (0:00:00.273) 0:00:19.721 *** 2025-09-17 16:15:13.847356 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:15:13.847367 | orchestrator | 2025-09-17 16:15:13.847378 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-17 16:15:13.847389 | orchestrator | Wednesday 17 September 2025 16:13:50 +0000 (0:00:00.821) 0:00:20.543 *** 2025-09-17 16:15:13.847414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 16:15:13.847438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 16:15:13.847463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 16:15:13.847476 | orchestrator | 2025-09-17 16:15:13.847487 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-17 16:15:13.847498 | orchestrator | Wednesday 17 September 2025 16:13:51 +0000 
(0:00:01.540) 0:00:22.083 *** 2025-09-17 16:15:13.847518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 16:15:13.847541 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:15:13.847566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 16:15:13.847579 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:15:13.847595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 16:15:13.847613 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:15:13.847623 | orchestrator | 2025-09-17 16:15:13.847634 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-17 16:15:13.847645 | orchestrator | Wednesday 17 September 2025 16:13:52 +0000 (0:00:00.700) 0:00:22.784 *** 2025-09-17 16:15:13.847663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 16:15:13.847681 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:15:13.847698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 16:15:13.847711 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:15:13.847731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-17 16:15:13.847749 | 
orchestrator | skipping: [testbed-node-2] 2025-09-17 16:15:13.847760 | orchestrator | 2025-09-17 16:15:13.847770 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-17 16:15:13.847781 | orchestrator | Wednesday 17 September 2025 16:13:53 +0000 (0:00:01.189) 0:00:23.974 *** 2025-09-17 16:15:13.847798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 16:15:13.847819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 16:15:13.847843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-17 16:15:13.847855 | orchestrator | 2025-09-17 16:15:13.847866 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-17 16:15:13.847877 | orchestrator | Wednesday 17 September 2025 16:13:55 +0000 (0:00:01.377) 0:00:25.351 *** 2025-09-17 16:15:13.847888 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:15:13.847898 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:15:13.847909 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:15:13.847919 | orchestrator | 2025-09-17 16:15:13.847930 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-17 16:15:13.847947 | orchestrator | Wednesday 17 September 2025 16:13:55 +0000 (0:00:00.279) 0:00:25.631 *** 2025-09-17 16:15:13.847968 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:15:13.847979 | orchestrator | 2025-09-17 16:15:13.847990 | orchestrator | TASK 
[horizon : Creating Horizon database] *************************************
2025-09-17 16:15:13.848001 | orchestrator | Wednesday 17 September 2025 16:13:56 +0000 (0:00:00.563) 0:00:26.194 ***
2025-09-17 16:15:13.848011 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:15:13.848022 | orchestrator |
2025-09-17 16:15:13.848033 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-09-17 16:15:13.848043 | orchestrator | Wednesday 17 September 2025 16:13:58 +0000 (0:00:02.188) 0:00:28.382 ***
2025-09-17 16:15:13.848054 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:15:13.848065 | orchestrator |
2025-09-17 16:15:13.848075 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-09-17 16:15:13.848086 | orchestrator | Wednesday 17 September 2025 16:14:00 +0000 (0:00:02.234) 0:00:30.617 ***
2025-09-17 16:15:13.848096 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:15:13.848107 | orchestrator |
2025-09-17 16:15:13.848119 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-17 16:15:13.848129 | orchestrator | Wednesday 17 September 2025 16:14:16 +0000 (0:00:15.902) 0:00:46.519 ***
2025-09-17 16:15:13.848140 | orchestrator |
2025-09-17 16:15:13.848151 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-17 16:15:13.848161 | orchestrator | Wednesday 17 September 2025 16:14:16 +0000 (0:00:00.083) 0:00:46.603 ***
2025-09-17 16:15:13.848172 | orchestrator |
2025-09-17 16:15:13.848183 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-17 16:15:13.848193 | orchestrator | Wednesday 17 September 2025 16:14:16 +0000 (0:00:00.074) 0:00:46.677 ***
2025-09-17 16:15:13.848204 | orchestrator |
2025-09-17 16:15:13.848215 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-09-17 16:15:13.848226 | orchestrator | Wednesday 17 September 2025 16:14:16 +0000 (0:00:00.063) 0:00:46.741 ***
2025-09-17 16:15:13.848253 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:15:13.848265 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:15:13.848276 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:15:13.848287 | orchestrator |
2025-09-17 16:15:13.848297 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:15:13.848308 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-09-17 16:15:13.848319 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-17 16:15:13.848330 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-17 16:15:13.848341 | orchestrator |
2025-09-17 16:15:13.848352 | orchestrator |
2025-09-17 16:15:13.848362 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:15:13.848373 | orchestrator | Wednesday 17 September 2025 16:15:13 +0000 (0:00:56.913) 0:01:43.654 ***
2025-09-17 16:15:13.848384 | orchestrator | ===============================================================================
2025-09-17 16:15:13.848395 | orchestrator | horizon : Restart horizon container ------------------------------------ 56.91s
2025-09-17 16:15:13.848410 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.90s
2025-09-17 16:15:13.848421 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.23s
2025-09-17 16:15:13.848431 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.19s
2025-09-17 16:15:13.848442 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.19s
2025-09-17 16:15:13.848453 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.98s
2025-09-17 16:15:13.848464 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.62s
2025-09-17 16:15:13.848481 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.54s
2025-09-17 16:15:13.848491 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.53s
2025-09-17 16:15:13.848502 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.38s
2025-09-17 16:15:13.848513 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.19s
2025-09-17 16:15:13.848523 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.10s
2025-09-17 16:15:13.848534 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.82s
2025-09-17 16:15:13.848545 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s
2025-09-17 16:15:13.848555 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.70s
2025-09-17 16:15:13.848566 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s
2025-09-17 16:15:13.848577 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s
2025-09-17 16:15:13.848588 | orchestrator | horizon : Update policy file name --------------------------------------- 0.46s
2025-09-17 16:15:13.848598 | orchestrator | horizon : Update policy file name --------------------------------------- 0.45s
2025-09-17 16:15:13.848609 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.45s
2025-09-17 16:15:13.848620 | orchestrator | 2025-09-17 16:15:13 | INFO  | Task 443840c9-166a-4291-90c7-d47ef2fa27a2 is in state
STARTED
2025-09-17 16:15:13.848636 | orchestrator | 2025-09-17 16:15:13 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:16.896923 | orchestrator | 2025-09-17 16:15:16 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:16.897348 | orchestrator | 2025-09-17 16:15:16 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:16.898880 | orchestrator | 2025-09-17 16:15:16 | INFO  | Task 443840c9-166a-4291-90c7-d47ef2fa27a2 is in state SUCCESS
2025-09-17 16:15:16.899365 | orchestrator | 2025-09-17 16:15:16 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:19.931837 | orchestrator | 2025-09-17 16:15:19 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:19.932758 | orchestrator | 2025-09-17 16:15:19 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:19.932785 | orchestrator | 2025-09-17 16:15:19 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:22.962970 | orchestrator | 2025-09-17 16:15:22 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:22.964814 | orchestrator | 2025-09-17 16:15:22 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:22.964869 | orchestrator | 2025-09-17 16:15:22 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:26.003959 | orchestrator | 2025-09-17 16:15:26 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:26.005496 | orchestrator | 2025-09-17 16:15:26 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:26.005529 | orchestrator | 2025-09-17 16:15:26 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:29.044797 | orchestrator | 2025-09-17 16:15:29 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:29.047336 | orchestrator | 2025-09-17 16:15:29 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:29.047372 | orchestrator | 2025-09-17 16:15:29 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:32.089680 | orchestrator | 2025-09-17 16:15:32 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:32.091357 | orchestrator | 2025-09-17 16:15:32 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:32.091415 | orchestrator | 2025-09-17 16:15:32 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:35.126711 | orchestrator | 2025-09-17 16:15:35 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:35.126791 | orchestrator | 2025-09-17 16:15:35 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:35.126822 | orchestrator | 2025-09-17 16:15:35 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:38.162664 | orchestrator | 2025-09-17 16:15:38 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:38.164021 | orchestrator | 2025-09-17 16:15:38 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:38.164051 | orchestrator | 2025-09-17 16:15:38 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:41.208786 | orchestrator | 2025-09-17 16:15:41 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:41.210433 | orchestrator | 2025-09-17 16:15:41 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:41.210456 | orchestrator | 2025-09-17 16:15:41 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:44.244531 | orchestrator | 2025-09-17 16:15:44 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:44.246560 | orchestrator | 2025-09-17 16:15:44 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:44.246604 | orchestrator | 2025-09-17 16:15:44 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:47.284789 | orchestrator | 2025-09-17 16:15:47 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:47.286223 | orchestrator | 2025-09-17 16:15:47 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:47.286284 | orchestrator | 2025-09-17 16:15:47 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:50.328802 | orchestrator | 2025-09-17 16:15:50 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:50.330859 | orchestrator | 2025-09-17 16:15:50 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:50.331035 | orchestrator | 2025-09-17 16:15:50 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:53.377802 | orchestrator | 2025-09-17 16:15:53 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:53.378485 | orchestrator | 2025-09-17 16:15:53 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:53.378516 | orchestrator | 2025-09-17 16:15:53 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:56.423178 | orchestrator | 2025-09-17 16:15:56 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:56.424357 | orchestrator | 2025-09-17 16:15:56 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:56.424390 | orchestrator | 2025-09-17 16:15:56 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:15:59.466787 | orchestrator | 2025-09-17 16:15:59 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:15:59.469014 | orchestrator | 2025-09-17 16:15:59 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:15:59.469054 | orchestrator | 2025-09-17 16:15:59 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:16:02.514086 | orchestrator | 2025-09-17 16:16:02 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:16:02.515373 | orchestrator | 2025-09-17 16:16:02 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:16:02.515401 | orchestrator | 2025-09-17 16:16:02 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:16:05.564444 | orchestrator | 2025-09-17 16:16:05 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:16:05.567891 | orchestrator | 2025-09-17 16:16:05 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:16:05.567995 | orchestrator | 2025-09-17 16:16:05 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:16:08.609369 | orchestrator | 2025-09-17 16:16:08 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state STARTED
2025-09-17 16:16:08.610767 | orchestrator | 2025-09-17 16:16:08 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED
2025-09-17 16:16:08.611154 | orchestrator | 2025-09-17 16:16:08 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:16:11.655760 | orchestrator | 2025-09-17 16:16:11 | INFO  | Task f67aaf0f-fa2f-4a46-a61c-e432f05d2c00 is in state STARTED
2025-09-17 16:16:11.655851 | orchestrator | 2025-09-17 16:16:11 | INFO  | Task a6b6ebbf-f275-4f65-a5bc-9b5aa1755414 is in state SUCCESS
2025-09-17 16:16:11.656809 | orchestrator |
2025-09-17 16:16:11.656839 | orchestrator |
2025-09-17 16:16:11.656852 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-09-17 16:16:11.656863 | orchestrator |
2025-09-17 16:16:11.656893 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-09-17 16:16:11.656905 | orchestrator | Wednesday 17 September 2025 16:14:50 +0000 (0:00:00.158) 0:00:00.158 ***
2025-09-17 16:16:11.656916 | orchestrator | ok:
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-09-17 16:16:11.656927 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-17 16:16:11.656938 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-17 16:16:11.656949 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-09-17 16:16:11.656959 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-17 16:16:11.656969 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-09-17 16:16:11.656980 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-09-17 16:16:11.656990 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-09-17 16:16:11.657000 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-09-17 16:16:11.657011 | orchestrator |
2025-09-17 16:16:11.657022 | orchestrator | TASK [Create share directory] **************************************************
2025-09-17 16:16:11.657033 | orchestrator | Wednesday 17 September 2025 16:14:54 +0000 (0:00:04.198) 0:00:04.356 ***
2025-09-17 16:16:11.657045 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-17 16:16:11.657056 | orchestrator |
2025-09-17 16:16:11.657066 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-09-17 16:16:11.657077 | orchestrator | Wednesday 17 September 2025 16:14:55 +0000 (0:00:00.978) 0:00:05.335 ***
2025-09-17 16:16:11.657087 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-09-17 16:16:11.657152 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-17 16:16:11.657163 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-17 16:16:11.657199 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-09-17 16:16:11.657731 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-17 16:16:11.657747 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-09-17 16:16:11.657758 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-09-17 16:16:11.657768 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-09-17 16:16:11.657779 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-09-17 16:16:11.657789 | orchestrator |
2025-09-17 16:16:11.657800 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-09-17 16:16:11.657811 | orchestrator | Wednesday 17 September 2025 16:15:08 +0000 (0:00:12.258) 0:00:17.593 ***
2025-09-17 16:16:11.657822 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-09-17 16:16:11.657833 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-17 16:16:11.657843 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-17 16:16:11.657854 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-09-17 16:16:11.657864 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-17 16:16:11.657874 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-09-17 16:16:11.657885 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-09-17 16:16:11.657895 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-09-17 16:16:11.657906 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-09-17 16:16:11.657916 | orchestrator |
2025-09-17 16:16:11.657927 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:16:11.657937 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:16:11.657949 | orchestrator |
2025-09-17 16:16:11.657960 | orchestrator |
2025-09-17 16:16:11.657971 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:16:11.657982 | orchestrator | Wednesday 17 September 2025 16:15:13 +0000 (0:00:05.593) 0:00:23.186 ***
2025-09-17 16:16:11.657992 | orchestrator | ===============================================================================
2025-09-17 16:16:11.658003 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.26s
2025-09-17 16:16:11.658014 | orchestrator | Write ceph keys to the configuration directory -------------------------- 5.59s
2025-09-17 16:16:11.658083 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.20s
2025-09-17 16:16:11.658094 | orchestrator | Create share directory -------------------------------------------------- 0.98s
2025-09-17 16:16:11.658104 | orchestrator |
2025-09-17 16:16:11.658115 | orchestrator |
2025-09-17 16:16:11.658126 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 16:16:11.658137 | orchestrator |
2025-09-17 16:16:11.658188 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 16:16:11.658210 | orchestrator | Wednesday 17 September 2025 16:13:30 +0000 (0:00:00.257) 0:00:00.257 ***
2025-09-17 16:16:11.658221 | orchestrator | ok:
[testbed-node-0] 2025-09-17 16:16:11.658232 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:16:11.658243 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:16:11.658253 | orchestrator | 2025-09-17 16:16:11.658264 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:16:11.658303 | orchestrator | Wednesday 17 September 2025 16:13:30 +0000 (0:00:00.297) 0:00:00.555 *** 2025-09-17 16:16:11.658314 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-17 16:16:11.658325 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-17 16:16:11.658347 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-17 16:16:11.658359 | orchestrator | 2025-09-17 16:16:11.658443 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-17 16:16:11.658457 | orchestrator | 2025-09-17 16:16:11.658469 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-17 16:16:11.658481 | orchestrator | Wednesday 17 September 2025 16:13:30 +0000 (0:00:00.397) 0:00:00.953 *** 2025-09-17 16:16:11.658492 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:16:11.658505 | orchestrator | 2025-09-17 16:16:11.658516 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-17 16:16:11.658528 | orchestrator | Wednesday 17 September 2025 16:13:31 +0000 (0:00:00.540) 0:00:01.494 *** 2025-09-17 16:16:11.658545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.658563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.658621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.658647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 16:16:11.658661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 16:16:11.658673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 16:16:11.658684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.658697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.658708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.658719 | orchestrator | 2025-09-17 16:16:11.658730 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-17 16:16:11.658747 | orchestrator | Wednesday 17 September 2025 16:13:33 +0000 (0:00:01.734) 0:00:03.229 *** 2025-09-17 16:16:11.658764 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-17 16:16:11.658775 | orchestrator | 2025-09-17 16:16:11.658798 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-17 16:16:11.658809 | orchestrator | Wednesday 17 September 2025 16:13:33 +0000 (0:00:00.802) 0:00:04.031 *** 2025-09-17 16:16:11.658820 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:16:11.658831 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:16:11.658841 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:16:11.658852 | orchestrator | 2025-09-17 16:16:11.658862 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-17 16:16:11.658873 | orchestrator | Wednesday 17 September 2025 16:13:34 +0000 (0:00:00.439) 0:00:04.471 *** 
2025-09-17 16:16:11.658883 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 16:16:11.658894 | orchestrator | 2025-09-17 16:16:11.658904 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-17 16:16:11.658915 | orchestrator | Wednesday 17 September 2025 16:13:35 +0000 (0:00:00.667) 0:00:05.139 *** 2025-09-17 16:16:11.658925 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:16:11.658936 | orchestrator | 2025-09-17 16:16:11.658946 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-17 16:16:11.658957 | orchestrator | Wednesday 17 September 2025 16:13:35 +0000 (0:00:00.508) 0:00:05.648 *** 2025-09-17 16:16:11.658968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.658981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.658993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}}) 2025-09-17 16:16:11.659034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659070 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659110 | 
orchestrator | 2025-09-17 16:16:11.659121 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-17 16:16:11.659132 | orchestrator | Wednesday 17 September 2025 16:13:38 +0000 (0:00:03.326) 0:00:08.975 *** 2025-09-17 16:16:11.659155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-17 16:16:11.659168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 
16:16:11.659179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 16:16:11.659191 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:16:11.659202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-17 16:16:11.659220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 16:16:11.659243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 16:16:11.659255 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:16:11.659285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-17 16:16:11.659298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 16:16:11.659309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 16:16:11.659320 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:16:11.659331 | orchestrator | 2025-09-17 16:16:11.659351 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-17 16:16:11.659362 | orchestrator | Wednesday 17 September 2025 16:13:39 +0000 (0:00:00.521) 0:00:09.496 *** 2025-09-17 16:16:11.659373 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-17 16:16:11.659397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 16:16:11.659410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 16:16:11.659421 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:16:11.659432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-17 16:16:11.659444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 16:16:11.659462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 16:16:11.659473 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:16:11.659496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2025-09-17 16:16:11.659508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 16:16:11.659520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-17 16:16:11.659530 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:16:11.659541 | orchestrator | 2025-09-17 16:16:11.659552 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-17 16:16:11.659562 | orchestrator | Wednesday 17 September 2025 16:13:40 +0000 (0:00:00.739) 0:00:10.236 *** 2025-09-17 16:16:11.659573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.659591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.659615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.659627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659713 | orchestrator | 2025-09-17 16:16:11.659724 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-17 16:16:11.659734 | orchestrator | Wednesday 17 September 2025 16:13:43 +0000 (0:00:03.403) 0:00:13.640 *** 2025-09-17 16:16:11.659746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.659758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 16:16:11.659775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2025-09-17 16:16:11.659787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 16:16:11.659810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.659822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 16:16:11.659834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.659876 | orchestrator | 2025-09-17 16:16:11.659887 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-17 16:16:11.659898 | orchestrator | Wednesday 17 September 2025 16:13:48 +0000 (0:00:04.723) 0:00:18.364 *** 2025-09-17 16:16:11.659909 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:16:11.659919 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:16:11.659930 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:16:11.659940 | orchestrator | 2025-09-17 16:16:11.659951 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-17 16:16:11.659961 | orchestrator | Wednesday 17 September 2025 16:13:49 +0000 (0:00:01.399) 0:00:19.764 *** 2025-09-17 16:16:11.659972 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:16:11.659982 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:16:11.660088 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:16:11.660104 | orchestrator | 2025-09-17 16:16:11.660114 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-17 16:16:11.660133 | orchestrator | Wednesday 17 September 2025 16:13:50 +0000 (0:00:00.513) 0:00:20.277 *** 2025-09-17 16:16:11.660144 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:16:11.660155 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:16:11.660172 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:16:11.660182 | orchestrator 
| 2025-09-17 16:16:11.660193 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-17 16:16:11.660203 | orchestrator | Wednesday 17 September 2025 16:13:50 +0000 (0:00:00.370) 0:00:20.647 *** 2025-09-17 16:16:11.660214 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:16:11.660224 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:16:11.660235 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:16:11.660245 | orchestrator | 2025-09-17 16:16:11.660256 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-17 16:16:11.660320 | orchestrator | Wednesday 17 September 2025 16:13:51 +0000 (0:00:00.456) 0:00:21.103 *** 2025-09-17 16:16:11.660335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.660356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 16:16:11.660370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.660383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 16:16:11.660409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.660489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-17 16:16:11.660507 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.660519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.660530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.660541 | orchestrator | 2025-09-17 
16:16:11.660552 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-17 16:16:11.660564 | orchestrator | Wednesday 17 September 2025 16:13:53 +0000 (0:00:02.296) 0:00:23.400 ***
2025-09-17 16:16:11.660576 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:16:11.660588 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:16:11.660599 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:16:11.660611 | orchestrator |
2025-09-17 16:16:11.660622 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-09-17 16:16:11.660634 | orchestrator | Wednesday 17 September 2025 16:13:53 +0000 (0:00:00.297) 0:00:23.697 ***
2025-09-17 16:16:11.660646 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-17 16:16:11.660657 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-17 16:16:11.660669 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-17 16:16:11.660681 | orchestrator |
2025-09-17 16:16:11.660700 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-09-17 16:16:11.660718 | orchestrator | Wednesday 17 September 2025 16:13:55 +0000 (0:00:01.751) 0:00:25.449 ***
2025-09-17 16:16:11.660734 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-17 16:16:11.660744 | orchestrator |
2025-09-17 16:16:11.660753 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-09-17 16:16:11.660762 | orchestrator | Wednesday 17 September 2025 16:13:56 +0000 (0:00:01.060) 0:00:26.509 ***
2025-09-17 16:16:11.660772 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:16:11.660781 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:16:11.660790 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:16:11.660799 | orchestrator |
2025-09-17 16:16:11.660809 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-09-17 16:16:11.660818 | orchestrator | Wednesday 17 September 2025 16:13:56 +0000 (0:00:00.459) 0:00:26.969 ***
2025-09-17 16:16:11.660827 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-17 16:16:11.660837 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-17 16:16:11.660846 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-17 16:16:11.660855 | orchestrator |
2025-09-17 16:16:11.660864 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-09-17 16:16:11.660874 | orchestrator | Wednesday 17 September 2025 16:13:57 +0000 (0:00:00.842) 0:00:27.811 ***
2025-09-17 16:16:11.660883 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:16:11.660893 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:16:11.660902 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:16:11.660911 | orchestrator |
2025-09-17 16:16:11.660920 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-09-17 16:16:11.660930 | orchestrator | Wednesday 17 September 2025 16:13:57 +0000 (0:00:00.244) 0:00:28.055 ***
2025-09-17 16:16:11.660939 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-17 16:16:11.660948 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-17 16:16:11.660958 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-17 16:16:11.660967 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-17 16:16:11.660977 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-17 16:16:11.660987 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-17 16:16:11.660996 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-17 16:16:11.661006 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-17 16:16:11.661015 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-17 16:16:11.661024 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-17 16:16:11.661033 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-17 16:16:11.661042 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-17 16:16:11.661051 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-17 16:16:11.661061 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-17 16:16:11.661070 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-17 16:16:11.661079 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-17 16:16:11.661089 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-17 16:16:11.661098 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-17 16:16:11.661108 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-17 16:16:11.661123 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-17
16:16:11.661134 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-17 16:16:11.661144 | orchestrator | 2025-09-17 16:16:11.661154 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-17 16:16:11.661165 | orchestrator | Wednesday 17 September 2025 16:14:06 +0000 (0:00:08.931) 0:00:36.987 *** 2025-09-17 16:16:11.661176 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-17 16:16:11.661186 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-17 16:16:11.661197 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-17 16:16:11.661207 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-17 16:16:11.661217 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-17 16:16:11.661227 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-17 16:16:11.661238 | orchestrator | 2025-09-17 16:16:11.661249 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-17 16:16:11.661311 | orchestrator | Wednesday 17 September 2025 16:14:09 +0000 (0:00:02.525) 0:00:39.513 *** 2025-09-17 16:16:11.661330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.661342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.661354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-17 16:16:11.661371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 16:16:11.661394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 
16:16:11.661405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-17 16:16:11.661415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.661423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.661431 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-17 16:16:11.661444 | orchestrator | 2025-09-17 16:16:11.661452 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-17 16:16:11.661460 | orchestrator | Wednesday 17 September 2025 16:14:11 +0000 (0:00:02.427) 0:00:41.940 *** 2025-09-17 16:16:11.661467 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:16:11.661475 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:16:11.661482 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:16:11.661490 | orchestrator | 2025-09-17 16:16:11.661498 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-17 16:16:11.661505 | orchestrator | Wednesday 17 September 2025 16:14:12 +0000 (0:00:00.274) 0:00:42.215 *** 2025-09-17 16:16:11.661513 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:16:11.661521 | orchestrator | 2025-09-17 16:16:11.661528 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-17 16:16:11.661536 | orchestrator | Wednesday 17 September 2025 16:14:14 +0000 (0:00:02.344) 0:00:44.560 *** 2025-09-17 16:16:11.661544 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:16:11.661551 | orchestrator | 2025-09-17 16:16:11.661559 | orchestrator | TASK [keystone : Checking for any running keystone_fernet 
containers] ********** 2025-09-17 16:16:11.661567 | orchestrator | Wednesday 17 September 2025 16:14:16 +0000 (0:00:02.221) 0:00:46.782 *** 2025-09-17 16:16:11.661574 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:16:11.661582 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:16:11.661590 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:16:11.661597 | orchestrator | 2025-09-17 16:16:11.661605 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-17 16:16:11.661613 | orchestrator | Wednesday 17 September 2025 16:14:17 +0000 (0:00:01.062) 0:00:47.844 *** 2025-09-17 16:16:11.661620 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:16:11.661628 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:16:11.661635 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:16:11.661643 | orchestrator | 2025-09-17 16:16:11.661655 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-17 16:16:11.661666 | orchestrator | Wednesday 17 September 2025 16:14:18 +0000 (0:00:00.309) 0:00:48.153 *** 2025-09-17 16:16:11.661674 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:16:11.661682 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:16:11.661690 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:16:11.661697 | orchestrator | 2025-09-17 16:16:11.661705 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-17 16:16:11.661713 | orchestrator | Wednesday 17 September 2025 16:14:18 +0000 (0:00:00.369) 0:00:48.523 *** 2025-09-17 16:16:11.661720 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:16:11.661728 | orchestrator | 2025-09-17 16:16:11.661735 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-17 16:16:11.661743 | orchestrator | Wednesday 17 September 2025 16:14:32 +0000 (0:00:13.714) 0:01:02.237 *** 2025-09-17 16:16:11.661751 | 
orchestrator | changed: [testbed-node-0]
2025-09-17 16:16:11.661759 | orchestrator |
2025-09-17 16:16:11.661766 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-09-17 16:16:11.661774 | orchestrator | Wednesday 17 September 2025 16:14:42 +0000 (0:00:10.502) 0:01:12.740 ***
2025-09-17 16:16:11.661782 | orchestrator |
2025-09-17 16:16:11.661789 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-09-17 16:16:11.661797 | orchestrator | Wednesday 17 September 2025 16:14:42 +0000 (0:00:00.060) 0:01:12.800 ***
2025-09-17 16:16:11.661804 | orchestrator |
2025-09-17 16:16:11.661812 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-09-17 16:16:11.661824 | orchestrator | Wednesday 17 September 2025 16:14:42 +0000 (0:00:00.214) 0:01:13.014 ***
2025-09-17 16:16:11.661832 | orchestrator |
2025-09-17 16:16:11.661839 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-09-17 16:16:11.661847 | orchestrator | Wednesday 17 September 2025 16:14:43 +0000 (0:00:00.063) 0:01:13.078 ***
2025-09-17 16:16:11.661855 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:16:11.661862 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:16:11.661870 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:16:11.661878 | orchestrator |
2025-09-17 16:16:11.661885 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-09-17 16:16:11.661893 | orchestrator | Wednesday 17 September 2025 16:15:05 +0000 (0:00:22.234) 0:01:35.313 ***
2025-09-17 16:16:11.661901 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:16:11.661908 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:16:11.661916 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:16:11.661924 | orchestrator |
2025-09-17 16:16:11.661931 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-09-17 16:16:11.661939 | orchestrator | Wednesday 17 September 2025 16:15:10 +0000 (0:00:04.808) 0:01:40.121 ***
2025-09-17 16:16:11.661947 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:16:11.661955 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:16:11.661962 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:16:11.661970 | orchestrator |
2025-09-17 16:16:11.661978 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-17 16:16:11.661985 | orchestrator | Wednesday 17 September 2025 16:15:21 +0000 (0:00:11.371) 0:01:51.493 ***
2025-09-17 16:16:11.661993 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:16:11.662001 | orchestrator |
2025-09-17 16:16:11.662008 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-09-17 16:16:11.662040 | orchestrator | Wednesday 17 September 2025 16:15:22 +0000 (0:00:00.603) 0:01:52.096 ***
2025-09-17 16:16:11.662051 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:16:11.662058 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:16:11.662066 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:16:11.662074 | orchestrator |
2025-09-17 16:16:11.662081 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-09-17 16:16:11.662089 | orchestrator | Wednesday 17 September 2025 16:15:22 +0000 (0:00:00.832) 0:01:52.929 ***
2025-09-17 16:16:11.662097 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:16:11.662104 | orchestrator |
2025-09-17 16:16:11.662112 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-09-17 16:16:11.662120 | orchestrator | Wednesday 17 September 2025 16:15:24 +0000 (0:00:01.792) 0:01:54.722 ***
2025-09-17 16:16:11.662128 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-09-17 16:16:11.662135 | orchestrator |
2025-09-17 16:16:11.662143 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-09-17 16:16:11.662151 | orchestrator | Wednesday 17 September 2025 16:15:35 +0000 (0:00:11.097) 0:02:05.820 ***
2025-09-17 16:16:11.662158 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-09-17 16:16:11.662166 | orchestrator |
2025-09-17 16:16:11.662174 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-09-17 16:16:11.662181 | orchestrator | Wednesday 17 September 2025 16:15:58 +0000 (0:00:22.554) 0:02:28.374 ***
2025-09-17 16:16:11.662189 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-09-17 16:16:11.662196 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-09-17 16:16:11.662204 | orchestrator |
2025-09-17 16:16:11.662212 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-09-17 16:16:11.662219 | orchestrator | Wednesday 17 September 2025 16:16:05 +0000 (0:00:06.905) 0:02:35.280 ***
2025-09-17 16:16:11.662232 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:16:11.662240 | orchestrator |
2025-09-17 16:16:11.662247 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-09-17 16:16:11.662255 | orchestrator | Wednesday 17 September 2025 16:16:05 +0000 (0:00:00.129) 0:02:35.410 ***
2025-09-17 16:16:11.662263 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:16:11.662286 | orchestrator |
2025-09-17 16:16:11.662294 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-09-17 16:16:11.662302 | orchestrator | Wednesday 17 September 2025 16:16:05 +0000 (0:00:00.302) 0:02:35.713 ***
2025-09-17 16:16:11.662309 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:16:11.662317 | orchestrator | 2025-09-17 16:16:11.662330 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-17 16:16:11.662345 | orchestrator | Wednesday 17 September 2025 16:16:05 +0000 (0:00:00.111) 0:02:35.824 *** 2025-09-17 16:16:11.662354 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:16:11.662361 | orchestrator | 2025-09-17 16:16:11.662369 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-17 16:16:11.662377 | orchestrator | Wednesday 17 September 2025 16:16:06 +0000 (0:00:00.316) 0:02:36.141 *** 2025-09-17 16:16:11.662384 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:16:11.662392 | orchestrator | 2025-09-17 16:16:11.662400 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-17 16:16:11.662407 | orchestrator | Wednesday 17 September 2025 16:16:09 +0000 (0:00:03.131) 0:02:39.272 *** 2025-09-17 16:16:11.662415 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:16:11.662423 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:16:11.662430 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:16:11.662438 | orchestrator | 2025-09-17 16:16:11.662446 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:16:11.662454 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-17 16:16:11.662462 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-17 16:16:11.662470 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-17 16:16:11.662478 | orchestrator | 2025-09-17 16:16:11.662486 | orchestrator | 2025-09-17 16:16:11.662493 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-17 16:16:11.662501 | orchestrator | Wednesday 17 September 2025 16:16:09 +0000 (0:00:00.414) 0:02:39.687 *** 2025-09-17 16:16:11.662509 | orchestrator | =============================================================================== 2025-09-17 16:16:11.662516 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.55s 2025-09-17 16:16:11.662524 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 22.23s 2025-09-17 16:16:11.662532 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.71s 2025-09-17 16:16:11.662539 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.37s 2025-09-17 16:16:11.662547 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.10s 2025-09-17 16:16:11.662555 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.50s 2025-09-17 16:16:11.662562 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.93s 2025-09-17 16:16:11.662570 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.91s 2025-09-17 16:16:11.662578 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.81s 2025-09-17 16:16:11.662585 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.72s 2025-09-17 16:16:11.662593 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.40s 2025-09-17 16:16:11.662605 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.33s 2025-09-17 16:16:11.662613 | orchestrator | keystone : Creating default user role ----------------------------------- 3.13s 2025-09-17 16:16:11.662621 | orchestrator | keystone : Copying files for 
keystone-ssh ------------------------------- 2.53s 2025-09-17 16:16:11.662629 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.43s 2025-09-17 16:16:11.662636 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.34s 2025-09-17 16:16:11.662644 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.30s 2025-09-17 16:16:11.662652 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.22s 2025-09-17 16:16:11.662659 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.79s 2025-09-17 16:16:11.662667 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.75s 2025-09-17 16:16:11.662675 | orchestrator | 2025-09-17 16:16:11 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:16:11.662683 | orchestrator | 2025-09-17 16:16:11 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:16:11.662690 | orchestrator | 2025-09-17 16:16:11 | INFO  | Task 7b187806-af1b-476d-bec7-bfb09ace5037 is in state STARTED 2025-09-17 16:16:11.662698 | orchestrator | 2025-09-17 16:16:11 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state STARTED 2025-09-17 16:16:11.662706 | orchestrator | 2025-09-17 16:16:11 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:16:14.690647 | orchestrator | 2025-09-17 16:16:14 | INFO  | Task f67aaf0f-fa2f-4a46-a61c-e432f05d2c00 is in state STARTED 2025-09-17 16:16:14.690729 | orchestrator | 2025-09-17 16:16:14 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:16:14.690742 | orchestrator | 2025-09-17 16:16:14 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:16:14.690753 | orchestrator | 2025-09-17 16:16:14 | INFO  | Task 7b187806-af1b-476d-bec7-bfb09ace5037 is in state STARTED 
2025-09-17 16:16:14.690778 | orchestrator | 2025-09-17 16:16:14 | INFO  | Task 76c01473-4385-48b1-bc30-29ab3fc22a8d is in state SUCCESS 2025-09-17 16:16:14.690788 | orchestrator | 2025-09-17 16:16:14 | INFO  | Task 7634b541-aff0-4d3b-9750-8dae4b4e4883 is in state STARTED 2025-09-17 16:16:14.690799 | orchestrator | 2025-09-17 16:16:14 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:17:00.333713 | orchestrator | 2025-09-17 16:17:00.333804 | orchestrator | 2025-09-17 16:17:00.333819 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-17 16:17:00.333831 | orchestrator | 2025-09-17 16:17:00.333843 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-17 16:17:00.333853 | orchestrator | Wednesday 17 September 2025 16:15:17 +0000 (0:00:00.204) 0:00:00.204
*** 2025-09-17 16:17:00.333865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-17 16:17:00.333877 | orchestrator | 2025-09-17 16:17:00.333888 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-17 16:17:00.333899 | orchestrator | Wednesday 17 September 2025 16:15:17 +0000 (0:00:00.175) 0:00:00.379 *** 2025-09-17 16:17:00.333910 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-17 16:17:00.333920 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-17 16:17:00.333932 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-17 16:17:00.333942 | orchestrator | 2025-09-17 16:17:00.333953 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-17 16:17:00.333964 | orchestrator | Wednesday 17 September 2025 16:15:18 +0000 (0:00:01.014) 0:00:01.394 *** 2025-09-17 16:17:00.333975 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-17 16:17:00.334007 | orchestrator | 2025-09-17 16:17:00.334076 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-17 16:17:00.334089 | orchestrator | Wednesday 17 September 2025 16:15:19 +0000 (0:00:00.992) 0:00:02.387 *** 2025-09-17 16:17:00.334100 | orchestrator | changed: [testbed-manager] 2025-09-17 16:17:00.334111 | orchestrator | 2025-09-17 16:17:00.334122 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-17 16:17:00.334132 | orchestrator | Wednesday 17 September 2025 16:15:20 +0000 (0:00:00.831) 0:00:03.218 *** 2025-09-17 16:17:00.334143 | orchestrator | changed: [testbed-manager] 2025-09-17 16:17:00.334153 | orchestrator | 2025-09-17 16:17:00.334164 
| orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-17 16:17:00.334175 | orchestrator | Wednesday 17 September 2025 16:15:21 +0000 (0:00:00.780) 0:00:03.999 *** 2025-09-17 16:17:00.334185 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-17 16:17:00.334280 | orchestrator | ok: [testbed-manager] 2025-09-17 16:17:00.334322 | orchestrator | 2025-09-17 16:17:00.334342 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-17 16:17:00.334354 | orchestrator | Wednesday 17 September 2025 16:16:03 +0000 (0:00:41.657) 0:00:45.656 *** 2025-09-17 16:17:00.334364 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-17 16:17:00.334375 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-17 16:17:00.334386 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-17 16:17:00.334397 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-17 16:17:00.334407 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-17 16:17:00.334418 | orchestrator | 2025-09-17 16:17:00.334440 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-17 16:17:00.334452 | orchestrator | Wednesday 17 September 2025 16:16:06 +0000 (0:00:03.873) 0:00:49.529 *** 2025-09-17 16:17:00.334462 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-17 16:17:00.334473 | orchestrator | 2025-09-17 16:17:00.334483 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-17 16:17:00.334494 | orchestrator | Wednesday 17 September 2025 16:16:07 +0000 (0:00:00.436) 0:00:49.965 *** 2025-09-17 16:17:00.334504 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:17:00.334515 | orchestrator | 2025-09-17 16:17:00.334525 | orchestrator | TASK [osism.services.cephclient : 
Include rook task] *************************** 2025-09-17 16:17:00.334536 | orchestrator | Wednesday 17 September 2025 16:16:07 +0000 (0:00:00.126) 0:00:50.092 *** 2025-09-17 16:17:00.334547 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:17:00.334557 | orchestrator | 2025-09-17 16:17:00.334568 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-17 16:17:00.334579 | orchestrator | Wednesday 17 September 2025 16:16:07 +0000 (0:00:00.294) 0:00:50.387 *** 2025-09-17 16:17:00.334589 | orchestrator | changed: [testbed-manager] 2025-09-17 16:17:00.334600 | orchestrator | 2025-09-17 16:17:00.334610 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-17 16:17:00.334621 | orchestrator | Wednesday 17 September 2025 16:16:09 +0000 (0:00:01.571) 0:00:51.959 *** 2025-09-17 16:17:00.334631 | orchestrator | changed: [testbed-manager] 2025-09-17 16:17:00.334643 | orchestrator | 2025-09-17 16:17:00.334662 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-17 16:17:00.334678 | orchestrator | Wednesday 17 September 2025 16:16:10 +0000 (0:00:00.742) 0:00:52.701 *** 2025-09-17 16:17:00.334696 | orchestrator | changed: [testbed-manager] 2025-09-17 16:17:00.334713 | orchestrator | 2025-09-17 16:17:00.334733 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-17 16:17:00.334752 | orchestrator | Wednesday 17 September 2025 16:16:11 +0000 (0:00:00.994) 0:00:53.696 *** 2025-09-17 16:17:00.334769 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-17 16:17:00.334779 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-17 16:17:00.334801 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-17 16:17:00.334812 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-17 16:17:00.334823 | orchestrator | 2025-09-17 16:17:00.334833 
| orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:17:00.334844 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:17:00.334856 | orchestrator | 2025-09-17 16:17:00.334866 | orchestrator | 2025-09-17 16:17:00.334894 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:17:00.334906 | orchestrator | Wednesday 17 September 2025 16:16:12 +0000 (0:00:01.650) 0:00:55.347 *** 2025-09-17 16:17:00.334917 | orchestrator | =============================================================================== 2025-09-17 16:17:00.334927 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.66s 2025-09-17 16:17:00.334938 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.87s 2025-09-17 16:17:00.334948 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.65s 2025-09-17 16:17:00.334959 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.57s 2025-09-17 16:17:00.334970 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.01s 2025-09-17 16:17:00.334980 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.99s 2025-09-17 16:17:00.334991 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 0.99s 2025-09-17 16:17:00.335001 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.83s 2025-09-17 16:17:00.335012 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.78s 2025-09-17 16:17:00.335022 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.74s 2025-09-17 16:17:00.335033 | orchestrator | osism.services.cephclient : 
Remove old wrapper scripts ------------------ 0.44s 2025-09-17 16:17:00.335046 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2025-09-17 16:17:00.335064 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.18s 2025-09-17 16:17:00.335082 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-09-17 16:17:00.335101 | orchestrator | 2025-09-17 16:17:00.335115 | orchestrator | 2025-09-17 16:17:00.335126 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-17 16:17:00.335136 | orchestrator | 2025-09-17 16:17:00.335147 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-17 16:17:00.335157 | orchestrator | Wednesday 17 September 2025 16:16:14 +0000 (0:00:00.163) 0:00:00.163 *** 2025-09-17 16:17:00.335168 | orchestrator | changed: [localhost] 2025-09-17 16:17:00.335179 | orchestrator | 2025-09-17 16:17:00.335189 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-17 16:17:00.335200 | orchestrator | Wednesday 17 September 2025 16:16:15 +0000 (0:00:00.936) 0:00:01.099 *** 2025-09-17 16:17:00.335210 | orchestrator | changed: [localhost] 2025-09-17 16:17:00.335221 | orchestrator | 2025-09-17 16:17:00.335232 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-17 16:17:00.335242 | orchestrator | Wednesday 17 September 2025 16:16:44 +0000 (0:00:29.292) 0:00:30.391 *** 2025-09-17 16:17:00.335253 | orchestrator | changed: [localhost] 2025-09-17 16:17:00.335264 | orchestrator | 2025-09-17 16:17:00.335274 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:17:00.335285 | orchestrator | 2025-09-17 16:17:00.335296 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-09-17 16:17:00.335335 | orchestrator | Wednesday 17 September 2025 16:16:57 +0000 (0:00:12.685) 0:00:43.077 *** 2025-09-17 16:17:00.335346 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:17:00.335364 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:17:00.335375 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:17:00.335405 | orchestrator | 2025-09-17 16:17:00.335424 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:17:00.335435 | orchestrator | Wednesday 17 September 2025 16:16:57 +0000 (0:00:00.413) 0:00:43.491 *** 2025-09-17 16:17:00.335445 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-17 16:17:00.335456 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-17 16:17:00.335467 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-17 16:17:00.335477 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-17 16:17:00.335488 | orchestrator | 2025-09-17 16:17:00.335498 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-09-17 16:17:00.335509 | orchestrator | skipping: no hosts matched 2025-09-17 16:17:00.335519 | orchestrator | 2025-09-17 16:17:00.335637 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:17:00.335663 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:17:00.335682 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:17:00.335694 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:17:00.335705 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:17:00.335715 | 
orchestrator | 2025-09-17 16:17:00.335726 | orchestrator | 2025-09-17 16:17:00.335736 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:17:00.335747 | orchestrator | Wednesday 17 September 2025 16:16:58 +0000 (0:00:00.888) 0:00:44.380 *** 2025-09-17 16:17:00.335758 | orchestrator | =============================================================================== 2025-09-17 16:17:00.335768 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 29.29s 2025-09-17 16:17:00.335779 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.69s 2025-09-17 16:17:00.335789 | orchestrator | Ensure the destination directory exists --------------------------------- 0.94s 2025-09-17 16:17:00.335800 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s 2025-09-17 16:17:00.335820 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s 2025-09-17 16:17:00.335831 | orchestrator | 2025-09-17 16:17:00 | INFO  | Task f67aaf0f-fa2f-4a46-a61c-e432f05d2c00 is in state SUCCESS 2025-09-17 16:17:00.335842 | orchestrator | 2025-09-17 16:17:00 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:17:00.335853 | orchestrator | 2025-09-17 16:17:00 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:17:00.335863 | orchestrator | 2025-09-17 16:17:00 | INFO  | Task 7b187806-af1b-476d-bec7-bfb09ace5037 is in state STARTED 2025-09-17 16:17:00.339030 | orchestrator | 2025-09-17 16:17:00 | INFO  | Task 7634b541-aff0-4d3b-9750-8dae4b4e4883 is in state STARTED 2025-09-17 16:17:00.339062 | orchestrator | 2025-09-17 16:17:00 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:17:03.361257 | orchestrator | 2025-09-17 16:17:03 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:17:03.361703 
| orchestrator | 2025-09-17 16:17:03 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:17:03.363501 | orchestrator | 2025-09-17 16:17:03 | INFO  | Task 7b187806-af1b-476d-bec7-bfb09ace5037 is in state STARTED 2025-09-17 16:17:03.364314 | orchestrator | 2025-09-17 16:17:03 | INFO  | Task 7634b541-aff0-4d3b-9750-8dae4b4e4883 is in state STARTED 2025-09-17 16:17:03.365613 | orchestrator | 2025-09-17 16:17:03 | INFO  | Task 0f9664ba-5f70-487f-9aa5-89bb082a9563 is in state STARTED 2025-09-17 16:17:03.365638 | orchestrator | 2025-09-17 16:17:03 | INFO  | Wait 1 second(s) until the next check
[... identical polling rounds from 16:17:06 through 16:17:45 omitted; all five tasks remain in state STARTED ...]
2025-09-17 16:17:48.798421 | orchestrator | 2025-09-17 16:17:48 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:17:48.800802 | orchestrator | 2025-09-17 16:17:48 | INFO  | Task
8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:17:48.801364 | orchestrator | 2025-09-17 16:17:48 | INFO  | Task 7b187806-af1b-476d-bec7-bfb09ace5037 is in state STARTED 2025-09-17 16:17:48.801847 | orchestrator | 2025-09-17 16:17:48 | INFO  | Task 7634b541-aff0-4d3b-9750-8dae4b4e4883 is in state STARTED 2025-09-17 16:17:48.802697 | orchestrator | 2025-09-17 16:17:48 | INFO  | Task 0f9664ba-5f70-487f-9aa5-89bb082a9563 is in state STARTED 2025-09-17 16:17:48.802774 | orchestrator | 2025-09-17 16:17:48 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:17:51.825707 | orchestrator | 2025-09-17 16:17:51 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:17:51.827585 | orchestrator | 2025-09-17 16:17:51 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:17:51.828321 | orchestrator | 2025-09-17 16:17:51 | INFO  | Task 7b187806-af1b-476d-bec7-bfb09ace5037 is in state STARTED 2025-09-17 16:17:51.828942 | orchestrator | 2025-09-17 16:17:51 | INFO  | Task 7634b541-aff0-4d3b-9750-8dae4b4e4883 is in state SUCCESS 2025-09-17 16:17:51.829707 | orchestrator | 2025-09-17 16:17:51 | INFO  | Task 0f9664ba-5f70-487f-9aa5-89bb082a9563 is in state STARTED 2025-09-17 16:17:51.829728 | orchestrator | 2025-09-17 16:17:51 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:17:54.884067 | orchestrator | 2025-09-17 16:17:54 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:17:54.884132 | orchestrator | 2025-09-17 16:17:54 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:17:54.884143 | orchestrator | 2025-09-17 16:17:54 | INFO  | Task 7b187806-af1b-476d-bec7-bfb09ace5037 is in state STARTED 2025-09-17 16:17:54.884153 | orchestrator | 2025-09-17 16:17:54 | INFO  | Task 0f9664ba-5f70-487f-9aa5-89bb082a9563 is in state STARTED 2025-09-17 16:17:54.884162 | orchestrator | 2025-09-17 16:17:54 | INFO  | Wait 1 
second(s) until the next check 2025-09-17 16:17:57.915186 | orchestrator | 2025-09-17 16:17:57 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:17:57.917805 | orchestrator | 2025-09-17 16:17:57 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:17:57.921189 | orchestrator | 2025-09-17 16:17:57 | INFO  | Task 7b187806-af1b-476d-bec7-bfb09ace5037 is in state STARTED 2025-09-17 16:17:57.921920 | orchestrator | 2025-09-17 16:17:57 | INFO  | Task 0f9664ba-5f70-487f-9aa5-89bb082a9563 is in state STARTED 2025-09-17 16:17:57.921946 | orchestrator | 2025-09-17 16:17:57 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:00.946948 | orchestrator | 2025-09-17 16:18:00 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:00.947836 | orchestrator | 2025-09-17 16:18:00 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:00.949189 | orchestrator | 2025-09-17 16:18:00 | INFO  | Task 7b187806-af1b-476d-bec7-bfb09ace5037 is in state STARTED 2025-09-17 16:18:00.950125 | orchestrator | 2025-09-17 16:18:00 | INFO  | Task 0f9664ba-5f70-487f-9aa5-89bb082a9563 is in state STARTED 2025-09-17 16:18:00.950156 | orchestrator | 2025-09-17 16:18:00 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:03.980142 | orchestrator | 2025-09-17 16:18:03 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:03.981715 | orchestrator | 2025-09-17 16:18:03 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:03.981928 | orchestrator | 2025-09-17 16:18:03 | INFO  | Task 7b187806-af1b-476d-bec7-bfb09ace5037 is in state STARTED 2025-09-17 16:18:03.982798 | orchestrator | 2025-09-17 16:18:03 | INFO  | Task 0f9664ba-5f70-487f-9aa5-89bb082a9563 is in state STARTED 2025-09-17 16:18:03.982828 | orchestrator | 2025-09-17 16:18:03 | INFO  | Wait 1 second(s) until the next check 
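The task-state polling pattern in this log (query each task's state, report it, wait, repeat until no task is still STARTED) can be sketched roughly as below. This is a minimal illustration, not the OSISM implementation; `fetch_state` is a hypothetical stand-in for whatever backend (e.g. a Celery result lookup) supplies the task state:

```python
import time

def wait_for_tasks(task_ids, fetch_state, interval=1.0, log=print):
    """Poll task states until none is still STARTED, logging each round.

    fetch_state(task_id) -> str is a hypothetical state lookup; interval is
    the pause between polling rounds, mirroring the "Wait N second(s)" lines.
    """
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = fetch_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
        pending = still_running
        if pending:
            log(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)

# Simulated backend: each task reports SUCCESS after its third poll.
polls = {"88aa6be1": 0, "7634b541": 0}

def fake_state(task_id):
    polls[task_id] += 1
    return "SUCCESS" if polls[task_id] >= 3 else "STARTED"

wait_for_tasks(polls, fake_state, interval=0)
```

Note that finished tasks drop out of the polling set, which matches how task 7634b541 stops appearing in later rounds after reaching SUCCESS.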
[... repeated polling rounds at 16:18:07, 16:18:10, and 16:18:13 omitted; tasks 88aa6be1, 8509fba7, 7b187806, and 0f9664ba remain in state STARTED ...]
2025-09-17 16:18:16.094470 |
orchestrator | 2025-09-17 16:18:16 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:16.095367 | orchestrator | 2025-09-17 16:18:16 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:16.096661 | orchestrator | 2025-09-17 16:18:16 | INFO  | Task 7b187806-af1b-476d-bec7-bfb09ace5037 is in state SUCCESS 2025-09-17 16:18:16.098425 | orchestrator | 2025-09-17 16:18:16.098522 | orchestrator | 2025-09-17 16:18:16.098536 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2025-09-17 16:18:16.098548 | orchestrator | 2025-09-17 16:18:16.098559 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-17 16:18:16.098570 | orchestrator | Wednesday 17 September 2025 16:16:16 +0000 (0:00:00.202) 0:00:00.202 *** 2025-09-17 16:18:16.098581 | orchestrator | changed: [testbed-manager] 2025-09-17 16:18:16.098593 | orchestrator | 2025-09-17 16:18:16.098604 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-17 16:18:16.098615 | orchestrator | Wednesday 17 September 2025 16:16:17 +0000 (0:00:01.660) 0:00:01.862 *** 2025-09-17 16:18:16.098626 | orchestrator | changed: [testbed-manager] 2025-09-17 16:18:16.098659 | orchestrator | 2025-09-17 16:18:16.098671 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-17 16:18:16.098682 | orchestrator | Wednesday 17 September 2025 16:16:18 +0000 (0:00:00.934) 0:00:02.797 *** 2025-09-17 16:18:16.098692 | orchestrator | changed: [testbed-manager] 2025-09-17 16:18:16.098703 | orchestrator | 2025-09-17 16:18:16.098729 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-17 16:18:16.098740 | orchestrator | Wednesday 17 September 2025 16:16:19 +0000 (0:00:00.911) 0:00:03.708 *** 2025-09-17 16:18:16.098837 | orchestrator |
changed: [testbed-manager] 2025-09-17 16:18:16.098849 | orchestrator | 2025-09-17 16:18:16.098860 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-17 16:18:16.098871 | orchestrator | Wednesday 17 September 2025 16:16:20 +0000 (0:00:01.018) 0:00:04.726 *** 2025-09-17 16:18:16.098882 | orchestrator | changed: [testbed-manager] 2025-09-17 16:18:16.098892 | orchestrator | 2025-09-17 16:18:16.098903 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-17 16:18:16.098914 | orchestrator | Wednesday 17 September 2025 16:16:21 +0000 (0:00:00.942) 0:00:05.669 *** 2025-09-17 16:18:16.098924 | orchestrator | changed: [testbed-manager] 2025-09-17 16:18:16.098952 | orchestrator | 2025-09-17 16:18:16.098963 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-17 16:18:16.098974 | orchestrator | Wednesday 17 September 2025 16:16:22 +0000 (0:00:01.126) 0:00:06.795 *** 2025-09-17 16:18:16.098985 | orchestrator | changed: [testbed-manager] 2025-09-17 16:18:16.098996 | orchestrator | 2025-09-17 16:18:16.099007 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-17 16:18:16.099017 | orchestrator | Wednesday 17 September 2025 16:16:24 +0000 (0:00:02.091) 0:00:08.886 *** 2025-09-17 16:18:16.099040 | orchestrator | changed: [testbed-manager] 2025-09-17 16:18:16.099060 | orchestrator | 2025-09-17 16:18:16.099072 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-17 16:18:16.099083 | orchestrator | Wednesday 17 September 2025 16:16:26 +0000 (0:00:01.182) 0:00:10.069 *** 2025-09-17 16:18:16.099094 | orchestrator | changed: [testbed-manager] 2025-09-17 16:18:16.099104 | orchestrator | 2025-09-17 16:18:16.099115 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-17 
16:18:16.099126 | orchestrator | Wednesday 17 September 2025 16:17:25 +0000 (0:00:59.089) 0:01:09.158 *** 2025-09-17 16:18:16.099137 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:18:16.099148 | orchestrator | 2025-09-17 16:18:16.099158 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-17 16:18:16.099169 | orchestrator | 2025-09-17 16:18:16.099180 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-17 16:18:16.099190 | orchestrator | Wednesday 17 September 2025 16:17:25 +0000 (0:00:00.161) 0:01:09.320 *** 2025-09-17 16:18:16.099201 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:18:16.099212 | orchestrator | 2025-09-17 16:18:16.099223 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-17 16:18:16.099233 | orchestrator | 2025-09-17 16:18:16.099244 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-17 16:18:16.099255 | orchestrator | Wednesday 17 September 2025 16:17:36 +0000 (0:00:11.464) 0:01:20.784 *** 2025-09-17 16:18:16.099265 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:18:16.099276 | orchestrator | 2025-09-17 16:18:16.099287 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-17 16:18:16.099298 | orchestrator | 2025-09-17 16:18:16.099309 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-17 16:18:16.099319 | orchestrator | Wednesday 17 September 2025 16:17:48 +0000 (0:00:11.317) 0:01:32.102 *** 2025-09-17 16:18:16.099330 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:18:16.099398 | orchestrator | 2025-09-17 16:18:16.099413 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:18:16.099434 | orchestrator | testbed-manager : ok=9  changed=9 
 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-17 16:18:16.099447 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:18:16.099458 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:18:16.099469 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:18:16.099479 | orchestrator | 2025-09-17 16:18:16.099490 | orchestrator | 2025-09-17 16:18:16.099502 | orchestrator | 2025-09-17 16:18:16.099512 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:18:16.099523 | orchestrator | Wednesday 17 September 2025 16:17:49 +0000 (0:00:01.160) 0:01:33.262 *** 2025-09-17 16:18:16.099534 | orchestrator | =============================================================================== 2025-09-17 16:18:16.099545 | orchestrator | Create admin user ------------------------------------------------------ 59.09s 2025-09-17 16:18:16.099556 | orchestrator | Restart ceph manager service ------------------------------------------- 23.94s 2025-09-17 16:18:16.099580 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.09s 2025-09-17 16:18:16.099591 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.66s 2025-09-17 16:18:16.099602 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.18s 2025-09-17 16:18:16.099613 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.13s 2025-09-17 16:18:16.099623 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.02s 2025-09-17 16:18:16.099634 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.94s 2025-09-17 16:18:16.099644 | orchestrator | Set 
mgr/dashboard/ssl to false ------------------------------------------ 0.93s 2025-09-17 16:18:16.099655 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.91s 2025-09-17 16:18:16.099666 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s 2025-09-17 16:18:16.099676 | orchestrator | 2025-09-17 16:18:16.099687 | orchestrator | 2025-09-17 16:18:16.099697 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:18:16.099708 | orchestrator | 2025-09-17 16:18:16.099718 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:18:16.099729 | orchestrator | Wednesday 17 September 2025 16:16:14 +0000 (0:00:00.286) 0:00:00.286 *** 2025-09-17 16:18:16.099740 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:18:16.099751 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:18:16.099761 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:18:16.099772 | orchestrator | 2025-09-17 16:18:16.099783 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:18:16.099793 | orchestrator | Wednesday 17 September 2025 16:16:14 +0000 (0:00:00.303) 0:00:00.590 *** 2025-09-17 16:18:16.099804 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-17 16:18:16.099815 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-17 16:18:16.099825 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-17 16:18:16.099836 | orchestrator | 2025-09-17 16:18:16.099847 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-17 16:18:16.099857 | orchestrator | 2025-09-17 16:18:16.099868 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-17 16:18:16.099879 | orchestrator | Wednesday 17 September 
2025 16:16:15 +0000 (0:00:00.529) 0:00:01.119 *** 2025-09-17 16:18:16.099895 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:18:16.099907 | orchestrator | 2025-09-17 16:18:16.099918 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-17 16:18:16.099934 | orchestrator | Wednesday 17 September 2025 16:16:15 +0000 (0:00:00.437) 0:00:01.556 *** 2025-09-17 16:18:16.099943 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-17 16:18:16.099953 | orchestrator | 2025-09-17 16:18:16.099962 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-17 16:18:16.099971 | orchestrator | Wednesday 17 September 2025 16:16:19 +0000 (0:00:03.723) 0:00:05.280 *** 2025-09-17 16:18:16.099981 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-17 16:18:16.099991 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-17 16:18:16.100000 | orchestrator | 2025-09-17 16:18:16.100010 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-17 16:18:16.100020 | orchestrator | Wednesday 17 September 2025 16:16:25 +0000 (0:00:06.455) 0:00:11.735 *** 2025-09-17 16:18:16.100029 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-17 16:18:16.100038 | orchestrator | 2025-09-17 16:18:16.100048 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-17 16:18:16.100057 | orchestrator | Wednesday 17 September 2025 16:16:29 +0000 (0:00:03.657) 0:00:15.392 *** 2025-09-17 16:18:16.100067 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-17 16:18:16.100076 | orchestrator | changed: [testbed-node-0] => (item=barbican 
-> service) 2025-09-17 16:18:16.100085 | orchestrator | 2025-09-17 16:18:16.100095 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-17 16:18:16.100104 | orchestrator | Wednesday 17 September 2025 16:16:33 +0000 (0:00:04.339) 0:00:19.732 *** 2025-09-17 16:18:16.100114 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-17 16:18:16.100123 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-17 16:18:16.100132 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-17 16:18:16.100142 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-17 16:18:16.100151 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-17 16:18:16.100161 | orchestrator | 2025-09-17 16:18:16.100170 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-17 16:18:16.100180 | orchestrator | Wednesday 17 September 2025 16:16:51 +0000 (0:00:17.603) 0:00:37.336 *** 2025-09-17 16:18:16.100189 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-17 16:18:16.100198 | orchestrator | 2025-09-17 16:18:16.100208 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-17 16:18:16.100217 | orchestrator | Wednesday 17 September 2025 16:16:56 +0000 (0:00:05.285) 0:00:42.622 *** 2025-09-17 16:18:16.100236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.100249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.100269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.100280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.100291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.100308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.100319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.100335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.100369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.100380 | orchestrator | 2025-09-17 16:18:16.100389 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-17 16:18:16.100399 | orchestrator | Wednesday 17 September 2025 16:16:58 +0000 (0:00:02.062) 0:00:44.684 *** 2025-09-17 16:18:16.100409 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-17 16:18:16.100418 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-17 16:18:16.100428 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-17 16:18:16.100437 | orchestrator | 2025-09-17 16:18:16.100447 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-17 16:18:16.100456 | orchestrator | Wednesday 17 September 2025 16:17:00 +0000 (0:00:01.337) 0:00:46.022 *** 2025-09-17 16:18:16.100466 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:18:16.100475 | orchestrator | 2025-09-17 16:18:16.100485 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-17 16:18:16.100494 | orchestrator | Wednesday 17 September 2025 16:17:00 +0000 (0:00:00.127) 0:00:46.149 *** 2025-09-17 16:18:16.100503 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:18:16.100513 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:18:16.100522 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:18:16.100532 | orchestrator | 2025-09-17 16:18:16.100541 | orchestrator | TASK [barbican : include_tasks] 
************************************************ 2025-09-17 16:18:16.100551 | orchestrator | Wednesday 17 September 2025 16:17:00 +0000 (0:00:00.359) 0:00:46.509 *** 2025-09-17 16:18:16.100560 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:18:16.100569 | orchestrator | 2025-09-17 16:18:16.100579 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-17 16:18:16.100588 | orchestrator | Wednesday 17 September 2025 16:17:01 +0000 (0:00:00.467) 0:00:46.976 *** 2025-09-17 16:18:16.100605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.100622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.100636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.100647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.100657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.100672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.100687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.100697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.100712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.100722 | orchestrator | 2025-09-17 16:18:16.100732 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-17 16:18:16.100742 | orchestrator | Wednesday 17 September 2025 16:17:04 +0000 (0:00:03.523) 0:00:50.500 *** 2025-09-17 16:18:16.100752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 16:18:16.100762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.100789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.100799 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:18:16.100810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 16:18:16.100824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2025-09-17 16:18:16.100834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.100844 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:18:16.100854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 16:18:16.100864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.100883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.100894 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:18:16.100903 | orchestrator | 2025-09-17 16:18:16.100913 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-17 16:18:16.100923 | orchestrator | Wednesday 17 September 2025 16:17:05 +0000 (0:00:01.256) 0:00:51.756 *** 2025-09-17 16:18:16.100936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 16:18:16.100946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.100956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.100966 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:18:16.100976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 16:18:16.101250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.101266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.101276 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:18:16.101291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 16:18:16.101301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.101311 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.101327 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:18:16.101337 | orchestrator | 2025-09-17 16:18:16.101364 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-17 16:18:16.101374 | orchestrator | Wednesday 17 September 2025 16:17:06 +0000 (0:00:00.795) 0:00:52.551 *** 2025-09-17 16:18:16.101391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.101402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.101416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.101427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.101442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.101458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.101469 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.101479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.101492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.101503 | orchestrator | 2025-09-17 16:18:16.101512 | orchestrator | TASK [barbican : Copying over barbican-api.ini] 
******************************** 2025-09-17 16:18:16.101522 | orchestrator | Wednesday 17 September 2025 16:17:10 +0000 (0:00:03.901) 0:00:56.453 ***
2025-09-17 16:18:16.101532 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:18:16.101541 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:18:16.101551 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:18:16.101561 | orchestrator |
2025-09-17 16:18:16.101576 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-09-17 16:18:16.101585 | orchestrator | Wednesday 17 September 2025 16:17:12 +0000 (0:00:01.976) 0:00:58.432 ***
2025-09-17 16:18:16.101595 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-17 16:18:16.101604 | orchestrator |
2025-09-17 16:18:16.101614 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-09-17 16:18:16.101623 | orchestrator | Wednesday 17 September 2025 16:17:13 +0000 (0:00:01.331) 0:00:59.764 ***
2025-09-17 16:18:16.101633 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:18:16.101642 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:18:16.101651 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:18:16.101661 | orchestrator |
2025-09-17 16:18:16.101670 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-09-17 16:18:16.101680 | orchestrator | Wednesday 17 September 2025 16:17:14 +0000 (0:00:00.520) 0:01:00.284 ***
2025-09-17 16:18:16.101690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.101706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.101721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.101731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.101746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.101756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.101772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.101783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.101793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.101803 | orchestrator | 2025-09-17 16:18:16.101813 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-17 16:18:16.101822 | orchestrator | Wednesday 17 September 2025 16:17:25 +0000 (0:00:10.885) 0:01:11.170 *** 2025-09-17 16:18:16.101836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 16:18:16.101851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.101862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.101873 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:18:16.101891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 16:18:16.101904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.101921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.101938 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:18:16.101949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-17 16:18:16.101960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.101976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:18:16.101987 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:18:16.101998 | orchestrator | 
2025-09-17 16:18:16.102009 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-17 16:18:16.102095 | orchestrator | Wednesday 17 September 2025 16:17:26 +0000 (0:00:01.359) 0:01:12.530 *** 2025-09-17 16:18:16.102108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.102130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.102143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:16.102154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.102174 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.102185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.102206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.102218 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.102230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:18:16.102240 | orchestrator | 2025-09-17 16:18:16.102250 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-17 16:18:16.102259 | orchestrator | Wednesday 17 September 2025 16:17:29 +0000 (0:00:02.843) 0:01:15.374 *** 2025-09-17 16:18:16.102269 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:18:16.102278 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:18:16.102288 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:18:16.102297 | orchestrator | 2025-09-17 16:18:16.102307 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-17 16:18:16.102316 | orchestrator | Wednesday 17 September 2025 16:17:30 +0000 (0:00:00.567) 0:01:15.942 *** 
2025-09-17 16:18:16.102325 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:18:16.102335 | orchestrator |
2025-09-17 16:18:16.102366 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-09-17 16:18:16.102376 | orchestrator | Wednesday 17 September 2025 16:17:32 +0000 (0:00:02.279) 0:01:18.221 ***
2025-09-17 16:18:16.102385 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:18:16.102395 | orchestrator |
2025-09-17 16:18:16.102404 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-09-17 16:18:16.102414 | orchestrator | Wednesday 17 September 2025 16:17:34 +0000 (0:00:02.323) 0:01:20.545 ***
2025-09-17 16:18:16.102423 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:18:16.102432 | orchestrator |
2025-09-17 16:18:16.102442 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-17 16:18:16.102451 | orchestrator | Wednesday 17 September 2025 16:17:46 +0000 (0:00:11.587) 0:01:32.132 ***
2025-09-17 16:18:16.102461 | orchestrator |
2025-09-17 16:18:16.102470 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-17 16:18:16.102480 | orchestrator | Wednesday 17 September 2025 16:17:46 +0000 (0:00:00.142) 0:01:32.275 ***
2025-09-17 16:18:16.102489 | orchestrator |
2025-09-17 16:18:16.102504 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-17 16:18:16.102514 | orchestrator | Wednesday 17 September 2025 16:17:46 +0000 (0:00:00.175) 0:01:32.451 ***
2025-09-17 16:18:16.102523 | orchestrator |
2025-09-17 16:18:16.102538 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-09-17 16:18:16.102548 | orchestrator | Wednesday 17 September 2025 16:17:46 +0000 (0:00:00.158) 0:01:32.610 ***
2025-09-17 16:18:16.102557 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:18:16.102566 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:18:16.102576 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:18:16.102585 | orchestrator |
2025-09-17 16:18:16.102595 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-09-17 16:18:16.102604 | orchestrator | Wednesday 17 September 2025 16:17:54 +0000 (0:00:07.300) 0:01:39.910 ***
2025-09-17 16:18:16.102614 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:18:16.102623 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:18:16.102632 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:18:16.102642 | orchestrator |
2025-09-17 16:18:16.102651 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-09-17 16:18:16.102660 | orchestrator | Wednesday 17 September 2025 16:18:03 +0000 (0:00:09.691) 0:01:49.602 ***
2025-09-17 16:18:16.102670 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:18:16.102679 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:18:16.102688 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:18:16.102698 | orchestrator |
2025-09-17 16:18:16.102708 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:18:16.102718 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-17 16:18:16.102728 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 16:18:16.102737 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 16:18:16.102747 | orchestrator |
2025-09-17 16:18:16.102756 | orchestrator |
2025-09-17 16:18:16.102765 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:18:16.102779 | orchestrator | Wednesday 17 September 2025 16:18:14 +0000 (0:00:10.992) 0:02:00.594 ***
2025-09-17 16:18:16.102789 | orchestrator | ===============================================================================
2025-09-17 16:18:16.102798 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.60s
2025-09-17 16:18:16.102807 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.59s
2025-09-17 16:18:16.102817 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.99s
2025-09-17 16:18:16.102826 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.89s
2025-09-17 16:18:16.102836 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.69s
2025-09-17 16:18:16.102845 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.30s
2025-09-17 16:18:16.102854 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.46s
2025-09-17 16:18:16.102864 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.29s
2025-09-17 16:18:16.102873 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.34s
2025-09-17 16:18:16.102882 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.90s
2025-09-17 16:18:16.102892 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.72s
2025-09-17 16:18:16.102901 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.66s
2025-09-17 16:18:16.102910 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.52s
2025-09-17 16:18:16.102920 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.84s
2025-09-17 16:18:16.102929 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.32s
2025-09-17 16:18:16.102938 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.28s
2025-09-17 16:18:16.102953 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.06s
2025-09-17 16:18:16.102963 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.98s
2025-09-17 16:18:16.102972 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.36s
2025-09-17 16:18:16.102982 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.34s
2025-09-17 16:18:16.102991 | orchestrator | 2025-09-17 16:18:16 | INFO  | Task 0f9664ba-5f70-487f-9aa5-89bb082a9563 is in state STARTED
2025-09-17 16:18:16.103001 | orchestrator | 2025-09-17 16:18:16 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:18:19.121274 | orchestrator | 2025-09-17 16:18:19 | INFO  | Task fe9f0427-211f-498f-9262-83214159417c is in state STARTED
2025-09-17 16:18:19.122666 | orchestrator | 2025-09-17 16:18:19 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:18:19.123313 | orchestrator | 2025-09-17 16:18:19 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED
2025-09-17 16:18:19.124077 | orchestrator | 2025-09-17 16:18:19 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:18:19.125193 | orchestrator | 2025-09-17 16:18:19 | INFO  | Task 0f9664ba-5f70-487f-9aa5-89bb082a9563 is in state SUCCESS
2025-09-17 16:18:19.126313 | orchestrator |
2025-09-17 16:18:19.126367 | orchestrator |
2025-09-17 16:18:19.126380 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 16:18:19.126391 | orchestrator |
2025-09-17 16:18:19.126402 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 16:18:19.126413 | orchestrator | Wednesday 17 September 2025 16:17:03 +0000 (0:00:00.488) 0:00:00.488 ***
2025-09-17 16:18:19.126424 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:18:19.126435 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:18:19.126446 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:18:19.126457 | orchestrator |
2025-09-17 16:18:19.126467 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 16:18:19.126478 | orchestrator | Wednesday 17 September 2025 16:17:03 +0000 (0:00:00.804) 0:00:01.292 ***
2025-09-17 16:18:19.126489 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-09-17 16:18:19.126500 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-09-17 16:18:19.126511 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-09-17 16:18:19.126522 | orchestrator |
2025-09-17 16:18:19.126533 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-09-17 16:18:19.126543 | orchestrator |
2025-09-17 16:18:19.126554 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-17 16:18:19.126565 | orchestrator | Wednesday 17 September 2025 16:17:04 +0000 (0:00:00.928) 0:00:02.221 ***
2025-09-17 16:18:19.126575 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:18:19.126586 | orchestrator |
2025-09-17 16:18:19.126597 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-09-17 16:18:19.126608 | orchestrator | Wednesday 17 September 2025 16:17:05 +0000 (0:00:01.088) 0:00:03.309 ***
2025-09-17 16:18:19.126619 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-09-17 16:18:19.126629 | orchestrator |
2025-09-17 16:18:19.126640 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-09-17 16:18:19.126651 | orchestrator | Wednesday 17 September 2025 16:17:09 +0000 (0:00:03.668) 0:00:06.978 ***
2025-09-17 16:18:19.126661 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-09-17 16:18:19.126687 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-09-17 16:18:19.126698 | orchestrator |
2025-09-17 16:18:19.126728 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-09-17 16:18:19.126739 | orchestrator | Wednesday 17 September 2025 16:17:16 +0000 (0:00:06.822) 0:00:13.800 ***
2025-09-17 16:18:19.126749 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-17 16:18:19.126760 | orchestrator |
2025-09-17 16:18:19.126770 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-09-17 16:18:19.126781 | orchestrator | Wednesday 17 September 2025 16:17:19 +0000 (0:00:03.566) 0:00:17.367 ***
2025-09-17 16:18:19.126791 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-17 16:18:19.126802 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-09-17 16:18:19.126812 | orchestrator |
2025-09-17 16:18:19.126823 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-09-17 16:18:19.126833 | orchestrator | Wednesday 17 September 2025 16:17:24 +0000 (0:00:04.364) 0:00:21.731 ***
2025-09-17 16:18:19.126844 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-17 16:18:19.126855 | orchestrator |
2025-09-17 16:18:19.126865 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-09-17 16:18:19.126876 | orchestrator | Wednesday 17 September 2025 16:17:27 +0000 (0:00:03.512) 0:00:25.244 ***
2025-09-17 16:18:19.126886 | orchestrator | changed: [testbed-node-0]
=> (item=placement -> service -> admin) 2025-09-17 16:18:19.126898 | orchestrator | 2025-09-17 16:18:19.126908 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-17 16:18:19.126921 | orchestrator | Wednesday 17 September 2025 16:17:31 +0000 (0:00:03.589) 0:00:28.834 *** 2025-09-17 16:18:19.126933 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:18:19.126945 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:18:19.126956 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:18:19.126968 | orchestrator | 2025-09-17 16:18:19.126981 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-17 16:18:19.126993 | orchestrator | Wednesday 17 September 2025 16:17:31 +0000 (0:00:00.238) 0:00:29.072 *** 2025-09-17 16:18:19.127008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.127036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.127056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.127076 | orchestrator | 2025-09-17 16:18:19.127089 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-17 16:18:19.127101 | orchestrator | Wednesday 17 September 2025 16:17:32 +0000 (0:00:00.837) 0:00:29.910 *** 
2025-09-17 16:18:19.127114 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:18:19.127126 | orchestrator | 2025-09-17 16:18:19.127138 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-17 16:18:19.127151 | orchestrator | Wednesday 17 September 2025 16:17:32 +0000 (0:00:00.202) 0:00:30.112 *** 2025-09-17 16:18:19.127162 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:18:19.127174 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:18:19.127186 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:18:19.127199 | orchestrator | 2025-09-17 16:18:19.127211 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-17 16:18:19.127223 | orchestrator | Wednesday 17 September 2025 16:17:33 +0000 (0:00:00.802) 0:00:30.914 *** 2025-09-17 16:18:19.127235 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:18:19.127248 | orchestrator | 2025-09-17 16:18:19.127260 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-17 16:18:19.127272 | orchestrator | Wednesday 17 September 2025 16:17:34 +0000 (0:00:00.530) 0:00:31.445 *** 2025-09-17 16:18:19.127283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.127302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.127320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.127331 | orchestrator | 2025-09-17 16:18:19.127356 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-17 16:18:19.127368 | orchestrator | Wednesday 17 September 2025 16:17:35 +0000 (0:00:01.569) 0:00:33.014 *** 2025-09-17 16:18:19.127384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 16:18:19.127395 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:18:19.127406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 16:18:19.127417 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:18:19.127435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 16:18:19.127458 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:18:19.127469 | orchestrator | 2025-09-17 16:18:19.127480 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-17 16:18:19.127490 | orchestrator | Wednesday 17 September 2025 16:17:37 +0000 (0:00:01.477) 0:00:34.492 *** 2025-09-17 16:18:19.127501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 16:18:19.127512 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:18:19.127527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 16:18:19.127538 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:18:19.127549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 16:18:19.127560 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:18:19.127571 | orchestrator | 2025-09-17 16:18:19.127581 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-17 16:18:19.127592 | orchestrator | Wednesday 17 September 2025 16:17:37 +0000 (0:00:00.836) 0:00:35.328 *** 2025-09-17 16:18:19.127611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.127629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.127645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.127656 | orchestrator | 2025-09-17 16:18:19.127667 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-17 16:18:19.127677 | orchestrator | Wednesday 17 September 2025 16:17:39 +0000 (0:00:01.338) 0:00:36.667 *** 2025-09-17 16:18:19.127688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.127700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.127723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.127735 | orchestrator | 2025-09-17 16:18:19.127746 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-17 16:18:19.127757 | orchestrator | Wednesday 17 September 2025 16:17:43 +0000 (0:00:03.787) 0:00:40.455 *** 2025-09-17 16:18:19.127767 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-17 16:18:19.127778 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-17 16:18:19.127789 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-17 16:18:19.127799 | orchestrator | 2025-09-17 16:18:19.127810 | orchestrator | TASK [placement : Copying over 
migrate-db.rc.j2 configuration] ***************** 2025-09-17 16:18:19.127821 | orchestrator | Wednesday 17 September 2025 16:17:44 +0000 (0:00:01.567) 0:00:42.022 *** 2025-09-17 16:18:19.127836 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:18:19.127847 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:18:19.127857 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:18:19.127868 | orchestrator | 2025-09-17 16:18:19.127879 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-17 16:18:19.127889 | orchestrator | Wednesday 17 September 2025 16:17:46 +0000 (0:00:01.645) 0:00:43.667 *** 2025-09-17 16:18:19.127900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 16:18:19.127911 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:18:19.127922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 16:18:19.127939 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:18:19.127957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-17 16:18:19.127969 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:18:19.127980 | orchestrator | 2025-09-17 16:18:19.127990 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-17 16:18:19.128001 | orchestrator | Wednesday 17 September 
2025 16:17:47 +0000 (0:00:01.149) 0:00:44.816 *** 2025-09-17 16:18:19.128016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.128028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.128040 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-17 16:18:19.128061 | orchestrator | 2025-09-17 16:18:19.128072 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-17 16:18:19.128082 | orchestrator | Wednesday 17 September 2025 16:17:49 +0000 (0:00:02.329) 0:00:47.146 *** 2025-09-17 16:18:19.128093 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:18:19.128103 | orchestrator | 2025-09-17 16:18:19.128114 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-17 16:18:19.128125 | orchestrator | Wednesday 17 September 2025 16:17:51 +0000 (0:00:02.120) 0:00:49.268 *** 2025-09-17 16:18:19.128135 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:18:19.128146 | orchestrator | 2025-09-17 16:18:19.128156 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-17 16:18:19.128167 | orchestrator | Wednesday 17 September 2025 16:17:54 +0000 (0:00:02.341) 0:00:51.609 *** 2025-09-17 16:18:19.128183 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:18:19.128194 
| orchestrator | 2025-09-17 16:18:19.128205 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-17 16:18:19.128216 | orchestrator | Wednesday 17 September 2025 16:18:08 +0000 (0:00:14.283) 0:01:05.893 *** 2025-09-17 16:18:19.128226 | orchestrator | 2025-09-17 16:18:19.128237 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-17 16:18:19.128247 | orchestrator | Wednesday 17 September 2025 16:18:08 +0000 (0:00:00.060) 0:01:05.953 *** 2025-09-17 16:18:19.128258 | orchestrator | 2025-09-17 16:18:19.128268 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-17 16:18:19.128279 | orchestrator | Wednesday 17 September 2025 16:18:08 +0000 (0:00:00.058) 0:01:06.012 *** 2025-09-17 16:18:19.128289 | orchestrator | 2025-09-17 16:18:19.128300 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-17 16:18:19.128311 | orchestrator | Wednesday 17 September 2025 16:18:08 +0000 (0:00:00.116) 0:01:06.128 *** 2025-09-17 16:18:19.128321 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:18:19.128331 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:18:19.128364 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:18:19.128376 | orchestrator | 2025-09-17 16:18:19.128386 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:18:19.128398 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-17 16:18:19.128409 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-17 16:18:19.128420 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-17 16:18:19.128430 | orchestrator | 2025-09-17 16:18:19.128441 | orchestrator | 2025-09-17 
16:18:19.128452 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:18:19.128462 | orchestrator | Wednesday 17 September 2025 16:18:15 +0000 (0:00:07.167) 0:01:13.295 *** 2025-09-17 16:18:19.128477 | orchestrator | =============================================================================== 2025-09-17 16:18:19.128488 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.28s 2025-09-17 16:18:19.128507 | orchestrator | placement : Restart placement-api container ----------------------------- 7.17s 2025-09-17 16:18:19.128518 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.82s 2025-09-17 16:18:19.128528 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.36s 2025-09-17 16:18:19.128539 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.79s 2025-09-17 16:18:19.128549 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.67s 2025-09-17 16:18:19.128560 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.59s 2025-09-17 16:18:19.128571 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.57s 2025-09-17 16:18:19.128581 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.51s 2025-09-17 16:18:19.128592 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.34s 2025-09-17 16:18:19.128602 | orchestrator | placement : Check placement containers ---------------------------------- 2.33s 2025-09-17 16:18:19.128613 | orchestrator | placement : Creating placement databases -------------------------------- 2.12s 2025-09-17 16:18:19.128623 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.65s 2025-09-17 16:18:19.128634 | 
orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.57s 2025-09-17 16:18:19.128645 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.57s 2025-09-17 16:18:19.128655 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.47s 2025-09-17 16:18:19.128666 | orchestrator | placement : Copying over config.json files for services ----------------- 1.34s 2025-09-17 16:18:19.128676 | orchestrator | placement : Copying over existing policy file --------------------------- 1.15s 2025-09-17 16:18:19.128687 | orchestrator | placement : include_tasks ----------------------------------------------- 1.09s 2025-09-17 16:18:19.128697 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s 2025-09-17 16:18:19.128708 | orchestrator | 2025-09-17 16:18:19 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:22.144245 | orchestrator | 2025-09-17 16:18:22 | INFO  | Task fe9f0427-211f-498f-9262-83214159417c is in state STARTED 2025-09-17 16:18:22.144635 | orchestrator | 2025-09-17 16:18:22 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:22.145211 | orchestrator | 2025-09-17 16:18:22 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:22.145900 | orchestrator | 2025-09-17 16:18:22 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:22.145922 | orchestrator | 2025-09-17 16:18:22 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:25.190123 | orchestrator | 2025-09-17 16:18:25 | INFO  | Task fe9f0427-211f-498f-9262-83214159417c is in state SUCCESS 2025-09-17 16:18:25.192900 | orchestrator | 2025-09-17 16:18:25 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:25.195261 | orchestrator | 2025-09-17 16:18:25 | INFO  | Task 
88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:25.197557 | orchestrator | 2025-09-17 16:18:25 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:25.198139 | orchestrator | 2025-09-17 16:18:25 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:28.232425 | orchestrator | 2025-09-17 16:18:28 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:18:28.232763 | orchestrator | 2025-09-17 16:18:28 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:28.233483 | orchestrator | 2025-09-17 16:18:28 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:28.234280 | orchestrator | 2025-09-17 16:18:28 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:28.234305 | orchestrator | 2025-09-17 16:18:28 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:31.257773 | orchestrator | 2025-09-17 16:18:31 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:18:31.257975 | orchestrator | 2025-09-17 16:18:31 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:31.258576 | orchestrator | 2025-09-17 16:18:31 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:31.259239 | orchestrator | 2025-09-17 16:18:31 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:31.259344 | orchestrator | 2025-09-17 16:18:31 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:34.296816 | orchestrator | 2025-09-17 16:18:34 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:18:34.298001 | orchestrator | 2025-09-17 16:18:34 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:34.300213 | orchestrator | 2025-09-17 16:18:34 | INFO  | Task 
88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:34.301341 | orchestrator | 2025-09-17 16:18:34 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:34.301397 | orchestrator | 2025-09-17 16:18:34 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:37.328163 | orchestrator | 2025-09-17 16:18:37 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:18:37.328630 | orchestrator | 2025-09-17 16:18:37 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:37.329794 | orchestrator | 2025-09-17 16:18:37 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:37.330385 | orchestrator | 2025-09-17 16:18:37 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:37.330411 | orchestrator | 2025-09-17 16:18:37 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:40.368742 | orchestrator | 2025-09-17 16:18:40 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:18:40.370090 | orchestrator | 2025-09-17 16:18:40 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:40.371760 | orchestrator | 2025-09-17 16:18:40 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:40.373468 | orchestrator | 2025-09-17 16:18:40 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:40.373506 | orchestrator | 2025-09-17 16:18:40 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:43.406869 | orchestrator | 2025-09-17 16:18:43 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:18:43.408445 | orchestrator | 2025-09-17 16:18:43 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:43.409267 | orchestrator | 2025-09-17 16:18:43 | INFO  | Task 
88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:43.409965 | orchestrator | 2025-09-17 16:18:43 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:43.409990 | orchestrator | 2025-09-17 16:18:43 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:46.431508 | orchestrator | 2025-09-17 16:18:46 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:18:46.431838 | orchestrator | 2025-09-17 16:18:46 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:46.432668 | orchestrator | 2025-09-17 16:18:46 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:46.433221 | orchestrator | 2025-09-17 16:18:46 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:46.433420 | orchestrator | 2025-09-17 16:18:46 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:49.475709 | orchestrator | 2025-09-17 16:18:49 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:18:49.475809 | orchestrator | 2025-09-17 16:18:49 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:49.476328 | orchestrator | 2025-09-17 16:18:49 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:49.476837 | orchestrator | 2025-09-17 16:18:49 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:49.476875 | orchestrator | 2025-09-17 16:18:49 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:52.518518 | orchestrator | 2025-09-17 16:18:52 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:18:52.521244 | orchestrator | 2025-09-17 16:18:52 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:52.523592 | orchestrator | 2025-09-17 16:18:52 | INFO  | Task 
88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:52.525915 | orchestrator | 2025-09-17 16:18:52 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:52.526100 | orchestrator | 2025-09-17 16:18:52 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:55.555577 | orchestrator | 2025-09-17 16:18:55 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:18:55.555799 | orchestrator | 2025-09-17 16:18:55 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:55.556745 | orchestrator | 2025-09-17 16:18:55 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:55.557598 | orchestrator | 2025-09-17 16:18:55 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:55.557620 | orchestrator | 2025-09-17 16:18:55 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:18:58.580233 | orchestrator | 2025-09-17 16:18:58 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:18:58.580484 | orchestrator | 2025-09-17 16:18:58 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:18:58.581507 | orchestrator | 2025-09-17 16:18:58 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:18:58.583099 | orchestrator | 2025-09-17 16:18:58 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:18:58.583121 | orchestrator | 2025-09-17 16:18:58 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:19:01.609500 | orchestrator | 2025-09-17 16:19:01 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:19:01.609784 | orchestrator | 2025-09-17 16:19:01 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:19:01.610418 | orchestrator | 2025-09-17 16:19:01 | INFO  | Task 
88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state STARTED 2025-09-17 16:19:01.611105 | orchestrator | 2025-09-17 16:19:01 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:19:01.611128 | orchestrator | 2025-09-17 16:19:01 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:19:04.633916 | orchestrator | 2025-09-17 16:19:04 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state STARTED 2025-09-17 16:19:04.634283 | orchestrator | 2025-09-17 16:19:04 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:19:04.634981 | orchestrator | 2025-09-17 16:19:04 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:19:04.636563 | orchestrator | 2025-09-17 16:19:04 | INFO  | Task 88aa6be1-35aa-4e7b-9724-630b8cc8ef49 is in state SUCCESS 2025-09-17 16:19:04.638327 | orchestrator | 2025-09-17 16:19:04.638401 | orchestrator | 2025-09-17 16:19:04.638414 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:19:04.638425 | orchestrator | 2025-09-17 16:19:04.638723 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:19:04.638740 | orchestrator | Wednesday 17 September 2025 16:18:22 +0000 (0:00:00.309) 0:00:00.309 *** 2025-09-17 16:19:04.639111 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:19:04.639550 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:19:04.639568 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:19:04.639579 | orchestrator | 2025-09-17 16:19:04.639590 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:19:04.639601 | orchestrator | Wednesday 17 September 2025 16:18:23 +0000 (0:00:00.245) 0:00:00.555 *** 2025-09-17 16:19:04.639612 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-17 16:19:04.639623 | orchestrator | ok: [testbed-node-1] => 
(item=enable_keystone_True) 2025-09-17 16:19:04.639634 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-17 16:19:04.639644 | orchestrator | 2025-09-17 16:19:04.639655 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-17 16:19:04.639665 | orchestrator | 2025-09-17 16:19:04.639676 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-17 16:19:04.639686 | orchestrator | Wednesday 17 September 2025 16:18:23 +0000 (0:00:00.437) 0:00:00.992 *** 2025-09-17 16:19:04.639697 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:19:04.639707 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:19:04.639718 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:19:04.639728 | orchestrator | 2025-09-17 16:19:04.639739 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:19:04.639750 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:19:04.639762 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:19:04.639772 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:19:04.639783 | orchestrator | 2025-09-17 16:19:04.639793 | orchestrator | 2025-09-17 16:19:04.639804 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:19:04.639816 | orchestrator | Wednesday 17 September 2025 16:18:24 +0000 (0:00:00.860) 0:00:01.853 *** 2025-09-17 16:19:04.639827 | orchestrator | =============================================================================== 2025-09-17 16:19:04.639837 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.86s 2025-09-17 16:19:04.639848 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.44s 2025-09-17 16:19:04.639871 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s 2025-09-17 16:19:04.639882 | orchestrator | 2025-09-17 16:19:04.639893 | orchestrator | 2025-09-17 16:19:04.639903 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:19:04.639914 | orchestrator | 2025-09-17 16:19:04.639924 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:19:04.639935 | orchestrator | Wednesday 17 September 2025 16:16:14 +0000 (0:00:00.272) 0:00:00.272 *** 2025-09-17 16:19:04.639965 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:19:04.639976 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:19:04.639987 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:19:04.639997 | orchestrator | 2025-09-17 16:19:04.640008 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:19:04.640019 | orchestrator | Wednesday 17 September 2025 16:16:15 +0000 (0:00:00.328) 0:00:00.601 *** 2025-09-17 16:19:04.640030 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-17 16:19:04.640041 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-17 16:19:04.640051 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-17 16:19:04.640062 | orchestrator | 2025-09-17 16:19:04.640072 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-17 16:19:04.640083 | orchestrator | 2025-09-17 16:19:04.640093 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-17 16:19:04.640104 | orchestrator | Wednesday 17 September 2025 16:16:15 +0000 (0:00:00.585) 0:00:01.186 *** 2025-09-17 16:19:04.640115 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:19:04.640125 | orchestrator | 2025-09-17 16:19:04.640136 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-17 16:19:04.640146 | orchestrator | Wednesday 17 September 2025 16:16:16 +0000 (0:00:00.577) 0:00:01.764 *** 2025-09-17 16:19:04.640157 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-17 16:19:04.640167 | orchestrator | 2025-09-17 16:19:04.640178 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-17 16:19:04.640190 | orchestrator | Wednesday 17 September 2025 16:16:19 +0000 (0:00:03.701) 0:00:05.465 *** 2025-09-17 16:19:04.640203 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-17 16:19:04.640215 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-17 16:19:04.640228 | orchestrator | 2025-09-17 16:19:04.640240 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-17 16:19:04.640252 | orchestrator | Wednesday 17 September 2025 16:16:27 +0000 (0:00:07.418) 0:00:12.884 *** 2025-09-17 16:19:04.640265 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-17 16:19:04.640277 | orchestrator | 2025-09-17 16:19:04.640290 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-17 16:19:04.640302 | orchestrator | Wednesday 17 September 2025 16:16:30 +0000 (0:00:03.377) 0:00:16.261 *** 2025-09-17 16:19:04.640425 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-17 16:19:04.640443 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-17 16:19:04.640455 | orchestrator | 2025-09-17 16:19:04.640468 | orchestrator | TASK [service-ks-register : designate | Creating roles] 
************************ 2025-09-17 16:19:04.640480 | orchestrator | Wednesday 17 September 2025 16:16:34 +0000 (0:00:03.855) 0:00:20.117 *** 2025-09-17 16:19:04.640493 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-17 16:19:04.640506 | orchestrator | 2025-09-17 16:19:04.640518 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-17 16:19:04.640531 | orchestrator | Wednesday 17 September 2025 16:16:38 +0000 (0:00:03.420) 0:00:23.537 *** 2025-09-17 16:19:04.640544 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-17 16:19:04.640556 | orchestrator | 2025-09-17 16:19:04.640568 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-17 16:19:04.640581 | orchestrator | Wednesday 17 September 2025 16:16:42 +0000 (0:00:04.271) 0:00:27.809 *** 2025-09-17 16:19:04.640594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 16:19:04.640625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 16:19:04.640637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 16:19:04.640650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.640943 | orchestrator | 2025-09-17 16:19:04.640954 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-17 16:19:04.640972 | orchestrator | Wednesday 17 September 2025 16:16:45 +0000 (0:00:03.149) 0:00:30.958 *** 2025-09-17 16:19:04.640983 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:19:04.640994 | orchestrator | 2025-09-17 16:19:04.641005 | orchestrator | TASK [designate : Set 
designate policy file] *********************************** 2025-09-17 16:19:04.641015 | orchestrator | Wednesday 17 September 2025 16:16:45 +0000 (0:00:00.119) 0:00:31.078 *** 2025-09-17 16:19:04.641026 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:19:04.641036 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:19:04.641047 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:19:04.641057 | orchestrator | 2025-09-17 16:19:04.641068 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-17 16:19:04.641079 | orchestrator | Wednesday 17 September 2025 16:16:45 +0000 (0:00:00.265) 0:00:31.344 *** 2025-09-17 16:19:04.641090 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:19:04.641101 | orchestrator | 2025-09-17 16:19:04.641112 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-17 16:19:04.641122 | orchestrator | Wednesday 17 September 2025 16:16:46 +0000 (0:00:00.653) 0:00:31.997 *** 2025-09-17 16:19:04.641138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2025-09-17 16:19:04.641150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 16:19:04.641162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-17 16:19:04.641203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641251 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641314 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.641556 | orchestrator | 2025-09-17 16:19:04.641567 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-17 16:19:04.641578 | orchestrator | Wednesday 17 September 2025 16:16:52 +0000 (0:00:06.460) 0:00:38.457 
*** 2025-09-17 16:19:04.641590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:19:04.641606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 16:19:04.641617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641699 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:19:04.641709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:19:04.641723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 16:19:04.641733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641806 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:19:04.641816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:19:04.641830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 16:19:04.641840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641914 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:19:04.641924 | orchestrator | 2025-09-17 16:19:04.641933 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-17 16:19:04.641943 | orchestrator | Wednesday 17 September 2025 16:16:53 +0000 (0:00:00.919) 0:00:39.377 *** 2025-09-17 16:19:04.641953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:19:04.641967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 16:19:04.641977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.641993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642092 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:19:04.642103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.642118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.642135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642205 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:19:04.642215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.642229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.642245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642314 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:19:04.642324 | orchestrator |
2025-09-17 16:19:04.642334 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-09-17 16:19:04.642344 | orchestrator | Wednesday 17 September 2025 16:16:55 +0000 (0:00:01.440) 0:00:40.817 ***
2025-09-17 16:19:04.642369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.642387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.642404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.642441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.642453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.642464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.642493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642669 | orchestrator |
2025-09-17 16:19:04.642679 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-09-17 16:19:04.642688 | orchestrator | Wednesday 17 September 2025 16:17:02 +0000 (0:00:07.557) 0:00:48.375 ***
2025-09-17 16:19:04.642698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.642718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.642728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.642743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.642753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.642763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.642778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.642923 | orchestrator |
2025-09-17 16:19:04.642933 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-09-17 16:19:04.642942 | orchestrator | Wednesday 17 September 2025 16:17:21 +0000 (0:00:19.082) 0:01:07.457 ***
2025-09-17 16:19:04.642952 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-17 16:19:04.642962 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-17 16:19:04.642976 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-17 16:19:04.642986 | orchestrator |
2025-09-17 16:19:04.642995 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-09-17 16:19:04.643004 | orchestrator | Wednesday 17 September 2025 16:17:28 +0000 (0:00:06.926) 0:01:14.384 ***
2025-09-17 16:19:04.643014 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-17 16:19:04.643023 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-17 16:19:04.643032 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-17 16:19:04.643042 | orchestrator |
2025-09-17 16:19:04.643051 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-09-17 16:19:04.643061 | orchestrator | Wednesday 17 September 2025 16:17:32 +0000 (0:00:03.112) 0:01:17.497 ***
2025-09-17 16:19:04.643074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.643084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.643099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'},
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:19:04.643110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 16:19:04.643125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 16:19:04.643172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 16:19:04.643226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.643276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.643285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2025-09-17 16:19:04.643295 | orchestrator | 2025-09-17 16:19:04.643305 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-17 16:19:04.643314 | orchestrator | Wednesday 17 September 2025 16:17:35 +0000 (0:00:03.661) 0:01:21.159 *** 2025-09-17 16:19:04.643328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:19:04.643338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:19:04.643348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:19:04.643384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 16:19:04.643395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 16:19:04.643439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-17 16:19:04.643458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.643542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.643553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-17 16:19:04.643563 | orchestrator | 2025-09-17 16:19:04.643572 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-17 16:19:04.643582 | orchestrator | Wednesday 17 September 2025 16:17:38 +0000 (0:00:03.275) 0:01:24.435 *** 2025-09-17 16:19:04.643592 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:19:04.643602 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:19:04.643611 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:19:04.643620 | orchestrator | 2025-09-17 16:19:04.643630 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-17 16:19:04.643639 | orchestrator | Wednesday 17 September 2025 16:17:39 +0000 (0:00:00.434) 0:01:24.869 *** 2025-09-17 16:19:04.643653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:19:04.643663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-17 16:19:04.643674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-17 16:19:04.643704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-17 16:19:04.643714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.643728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.643738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.643753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.643768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.643778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.643788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.643798 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:19:04.643807 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:19:04.643821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.643831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.643849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.643863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.643874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.643883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.643893 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:19:04.643903 | orchestrator |
2025-09-17 16:19:04.643912 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-09-17 16:19:04.643922 | orchestrator | Wednesday 17 September 2025 16:17:41 +0000 (0:00:01.709) 0:01:26.579 ***
2025-09-17 16:19:04.644024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.644049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.644072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-17 16:19:04.644083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.644094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.644104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-17 16:19:04.644117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.644132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.644143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.644158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.644168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.644178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.644192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.644207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.644217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.644227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.644242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.644253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-17 16:19:04.644262 | orchestrator |
2025-09-17 16:19:04.644272 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-17 16:19:04.644282 | orchestrator | Wednesday 17 September 2025 16:17:45 +0000 (0:00:04.615) 0:01:31.194 ***
2025-09-17 16:19:04.644292 | orchestrator | skipping: [testbed-node-0] 2025-09-17
16:19:04.644301 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:19:04.644311 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:19:04.644320 | orchestrator |
2025-09-17 16:19:04.644330 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-09-17 16:19:04.644339 | orchestrator | Wednesday 17 September 2025 16:17:45 +0000 (0:00:00.240) 0:01:31.434 ***
2025-09-17 16:19:04.644349 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-09-17 16:19:04.644404 | orchestrator |
2025-09-17 16:19:04.644414 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-09-17 16:19:04.644430 | orchestrator | Wednesday 17 September 2025 16:17:48 +0000 (0:00:02.337) 0:01:33.772 ***
2025-09-17 16:19:04.644439 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 16:19:04.644449 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-09-17 16:19:04.644458 | orchestrator |
2025-09-17 16:19:04.644468 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-09-17 16:19:04.644481 | orchestrator | Wednesday 17 September 2025 16:17:51 +0000 (0:00:02.853) 0:01:36.625 ***
2025-09-17 16:19:04.644491 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:19:04.644500 | orchestrator |
2025-09-17 16:19:04.644510 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-17 16:19:04.644519 | orchestrator | Wednesday 17 September 2025 16:18:08 +0000 (0:00:16.939) 0:01:53.565 ***
2025-09-17 16:19:04.644529 | orchestrator |
2025-09-17 16:19:04.644538 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-17 16:19:04.644547 | orchestrator | Wednesday 17 September 2025 16:18:08 +0000 (0:00:00.061) 0:01:53.627 ***
2025-09-17 16:19:04.644557 | orchestrator |
2025-09-17 16:19:04.644566 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-17 16:19:04.644576 | orchestrator | Wednesday 17 September 2025 16:18:08 +0000 (0:00:00.060) 0:01:53.687 ***
2025-09-17 16:19:04.644585 | orchestrator |
2025-09-17 16:19:04.644594 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-09-17 16:19:04.644604 | orchestrator | Wednesday 17 September 2025 16:18:08 +0000 (0:00:00.061) 0:01:53.748 ***
2025-09-17 16:19:04.644613 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:19:04.644623 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:19:04.644632 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:19:04.644642 | orchestrator |
2025-09-17 16:19:04.644650 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-09-17 16:19:04.644658 | orchestrator | Wednesday 17 September 2025 16:18:17 +0000 (0:00:09.046) 0:02:02.795 ***
2025-09-17 16:19:04.644666 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:19:04.644674 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:19:04.644681 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:19:04.644689 | orchestrator |
2025-09-17 16:19:04.644697 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-09-17 16:19:04.644704 | orchestrator | Wednesday 17 September 2025 16:18:28 +0000 (0:00:11.614) 0:02:14.410 ***
2025-09-17 16:19:04.644712 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:19:04.644720 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:19:04.644727 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:19:04.644735 | orchestrator |
2025-09-17 16:19:04.644743 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-09-17 16:19:04.644750 | orchestrator | Wednesday 17 September 2025 16:18:34 +0000 (0:00:05.421) 0:02:19.831 ***
2025-09-17 16:19:04.644758 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:19:04.644766 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:19:04.644773 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:19:04.644781 | orchestrator |
2025-09-17 16:19:04.644789 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-09-17 16:19:04.644797 | orchestrator | Wednesday 17 September 2025 16:18:42 +0000 (0:00:08.444) 0:02:28.276 ***
2025-09-17 16:19:04.644804 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:19:04.644812 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:19:04.644820 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:19:04.644827 | orchestrator |
2025-09-17 16:19:04.644835 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-09-17 16:19:04.644847 | orchestrator | Wednesday 17 September 2025 16:18:48 +0000 (0:00:05.865) 0:02:34.141 ***
2025-09-17 16:19:04.644855 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:19:04.644863 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:19:04.644871 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:19:04.644883 | orchestrator |
2025-09-17 16:19:04.644891 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-09-17 16:19:04.644899 | orchestrator | Wednesday 17 September 2025 16:18:54 +0000 (0:00:05.891) 0:02:40.033 ***
2025-09-17 16:19:04.644907 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:19:04.644914 | orchestrator |
2025-09-17 16:19:04.644922 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:19:04.644930 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-17 16:19:04.644939 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 16:19:04.644947 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 16:19:04.644955 | orchestrator |
2025-09-17 16:19:04.644962 | orchestrator |
2025-09-17 16:19:04.644970 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:19:04.644978 | orchestrator | Wednesday 17 September 2025 16:19:02 +0000 (0:00:08.280) 0:02:48.313 ***
2025-09-17 16:19:04.644986 | orchestrator | ===============================================================================
2025-09-17 16:19:04.644993 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.08s
2025-09-17 16:19:04.645001 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.94s
2025-09-17 16:19:04.645009 | orchestrator | designate : Restart designate-api container ---------------------------- 11.61s
2025-09-17 16:19:04.645017 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.05s
2025-09-17 16:19:04.645024 | orchestrator | designate : Restart designate-producer container ------------------------ 8.44s
2025-09-17 16:19:04.645032 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.28s
2025-09-17 16:19:04.645040 | orchestrator | designate : Copying over config.json files for services ----------------- 7.56s
2025-09-17 16:19:04.645047 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.42s
2025-09-17 16:19:04.645055 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.93s
2025-09-17 16:19:04.645063 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.46s
2025-09-17 16:19:04.645074 | orchestrator | designate : Restart designate-worker container -------------------------- 5.89s
2025-09-17 16:19:04.645082 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.87s
2025-09-17 16:19:04.645089 | orchestrator | designate : Restart designate-central container ------------------------- 5.42s
2025-09-17 16:19:04.645097 | orchestrator | designate : Check designate containers ---------------------------------- 4.62s
2025-09-17 16:19:04.645105 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.27s
2025-09-17 16:19:04.645112 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.86s
2025-09-17 16:19:04.645120 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.70s
2025-09-17 16:19:04.645128 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.66s
2025-09-17 16:19:04.645136 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.42s
2025-09-17 16:19:04.645143 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.38s
2025-09-17 16:19:04.645151 | orchestrator | 2025-09-17 16:19:04 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:04.645159 | orchestrator | 2025-09-17 16:19:04 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:07.678936 | orchestrator | 2025-09-17 16:19:07 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state STARTED
2025-09-17 16:19:07.679500 | orchestrator | 2025-09-17 16:19:07 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:07.681324 | orchestrator | 2025-09-17 16:19:07 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:07.683088 | orchestrator | 2025-09-17 16:19:07 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:07.683271 | orchestrator | 2025-09-17 16:19:07 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:10.716212 | orchestrator | 2025-09-17 16:19:10 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state STARTED
2025-09-17 16:19:10.718692 | orchestrator | 2025-09-17 16:19:10 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:10.721831 | orchestrator | 2025-09-17 16:19:10 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:10.724486 | orchestrator | 2025-09-17 16:19:10 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:10.725238 | orchestrator | 2025-09-17 16:19:10 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:13.762825 | orchestrator | 2025-09-17 16:19:13 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state STARTED
2025-09-17 16:19:13.762913 | orchestrator | 2025-09-17 16:19:13 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:13.763470 | orchestrator | 2025-09-17 16:19:13 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:13.764240 | orchestrator | 2025-09-17 16:19:13 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:13.764262 | orchestrator | 2025-09-17 16:19:13 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:16.797062 | orchestrator | 2025-09-17 16:19:16 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state STARTED
2025-09-17 16:19:16.797559 | orchestrator | 2025-09-17 16:19:16 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:16.799944 | orchestrator | 2025-09-17 16:19:16 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:16.800196 | orchestrator | 2025-09-17 16:19:16 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:16.801041 | orchestrator | 2025-09-17 16:19:16 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:19.833443 | orchestrator | 2025-09-17 16:19:19 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state STARTED
2025-09-17 16:19:19.833602 | orchestrator | 2025-09-17 16:19:19 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:19.834080 | orchestrator | 2025-09-17 16:19:19 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:19.836026 | orchestrator | 2025-09-17 16:19:19 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:19.836045 | orchestrator | 2025-09-17 16:19:19 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:22.856469 | orchestrator | 2025-09-17 16:19:22 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state STARTED
2025-09-17 16:19:22.856639 | orchestrator | 2025-09-17 16:19:22 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:22.857296 | orchestrator | 2025-09-17 16:19:22 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:22.858065 | orchestrator | 2025-09-17 16:19:22 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:22.858088 | orchestrator | 2025-09-17 16:19:22 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:25.897732 | orchestrator | 2025-09-17 16:19:25 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state STARTED
2025-09-17 16:19:25.900513 | orchestrator | 2025-09-17 16:19:25 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:25.902002 | orchestrator | 2025-09-17 16:19:25 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:25.903544 | orchestrator | 2025-09-17 16:19:25 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:25.903761 | orchestrator | 2025-09-17 16:19:25 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:28.946298 | orchestrator | 2025-09-17 16:19:28 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state STARTED
2025-09-17 16:19:28.948340 | orchestrator | 2025-09-17 16:19:28 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:28.950259 | orchestrator | 2025-09-17 16:19:28 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:28.952014 | orchestrator | 2025-09-17 16:19:28 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:28.952404 | orchestrator | 2025-09-17 16:19:28 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:31.987839 | orchestrator | 2025-09-17 16:19:31 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state STARTED
2025-09-17 16:19:31.989345 | orchestrator | 2025-09-17 16:19:31 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:31.991929 | orchestrator | 2025-09-17 16:19:31 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:31.995433 | orchestrator | 2025-09-17 16:19:31 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:31.995507 | orchestrator | 2025-09-17 16:19:31 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:35.028283 | orchestrator | 2025-09-17 16:19:35 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state STARTED
2025-09-17 16:19:35.029083 | orchestrator | 2025-09-17 16:19:35 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:35.029112 | orchestrator | 2025-09-17 16:19:35 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:35.032427 | orchestrator | 2025-09-17 16:19:35 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:35.032452 | orchestrator | 2025-09-17 16:19:35 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:38.063593 | orchestrator | 2025-09-17 16:19:38 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state STARTED
2025-09-17 16:19:38.063771 | orchestrator | 2025-09-17 16:19:38 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:38.064551 | orchestrator | 2025-09-17 16:19:38 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:38.064989 | orchestrator | 2025-09-17 16:19:38 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:38.065124 | orchestrator | 2025-09-17 16:19:38 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:41.089280 | orchestrator | 2025-09-17 16:19:41 | INFO  | Task e5231dda-8174-4b71-a2b9-2686809e734c is in state SUCCESS
2025-09-17 16:19:41.089580 | orchestrator | 2025-09-17 16:19:41 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:41.090173 | orchestrator | 2025-09-17 16:19:41 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:41.090937 | orchestrator | 2025-09-17 16:19:41 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:41.091681 | orchestrator | 2025-09-17 16:19:41 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:19:41.091708 | orchestrator | 2025-09-17 16:19:41 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:44.121493 | orchestrator | 2025-09-17 16:19:44 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:44.121597 | orchestrator | 2025-09-17 16:19:44 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:44.121612 | orchestrator | 2025-09-17 16:19:44 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:44.121623 | orchestrator | 2025-09-17 16:19:44 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:19:44.121634 | orchestrator | 2025-09-17 16:19:44 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:47.151272 | orchestrator | 2025-09-17 16:19:47 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:47.154007 | orchestrator | 2025-09-17 16:19:47 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:47.157676 | orchestrator | 2025-09-17 16:19:47 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:47.159682 | orchestrator | 2025-09-17 16:19:47 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:19:47.160420 | orchestrator | 2025-09-17 16:19:47 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:50.184914 | orchestrator | 2025-09-17 16:19:50 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:50.184997 | orchestrator | 2025-09-17 16:19:50 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:50.185011 | orchestrator | 2025-09-17 16:19:50 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:50.185283 | orchestrator | 2025-09-17 16:19:50 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:19:50.185306 | orchestrator | 2025-09-17 16:19:50 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:53.223867 | orchestrator | 2025-09-17 16:19:53 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:19:53.223942 | orchestrator | 2025-09-17 16:19:53 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED
2025-09-17 16:19:53.223955 | orchestrator | 2025-09-17 16:19:53 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:19:53.223966 | orchestrator | 2025-09-17 16:19:53 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:19:53.223977 | orchestrator | 2025-09-17 16:19:53 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:19:56.255718 | orchestrator | 2025-09-17 16:19:56 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:19:56.259515 | orchestrator | 2025-09-17 16:19:56 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:19:56.266316 | orchestrator | 2025-09-17 16:19:56 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:19:56.269374 | orchestrator | 2025-09-17 16:19:56 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:19:56.269403 | orchestrator | 2025-09-17 16:19:56 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:19:59.333541 | orchestrator | 2025-09-17 16:19:59 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:19:59.334946 | orchestrator | 2025-09-17 16:19:59 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:19:59.336914 | orchestrator | 2025-09-17 16:19:59 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:19:59.338161 | orchestrator | 2025-09-17 16:19:59 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:19:59.339071 | orchestrator | 2025-09-17 16:19:59 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:20:02.371569 | orchestrator | 2025-09-17 16:20:02 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:20:02.372430 | orchestrator | 2025-09-17 16:20:02 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:20:02.373733 | orchestrator | 2025-09-17 16:20:02 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:20:02.374406 | orchestrator | 2025-09-17 16:20:02 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:20:02.374430 | orchestrator | 2025-09-17 16:20:02 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:20:05.409248 | 
orchestrator | 2025-09-17 16:20:05 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:20:05.411135 | orchestrator | 2025-09-17 16:20:05 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:20:05.412811 | orchestrator | 2025-09-17 16:20:05 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:20:05.414306 | orchestrator | 2025-09-17 16:20:05 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:20:05.414341 | orchestrator | 2025-09-17 16:20:05 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:20:08.463902 | orchestrator | 2025-09-17 16:20:08 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:20:08.464074 | orchestrator | 2025-09-17 16:20:08 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:20:08.464734 | orchestrator | 2025-09-17 16:20:08 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:20:08.465419 | orchestrator | 2025-09-17 16:20:08 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:20:08.465441 | orchestrator | 2025-09-17 16:20:08 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:20:11.495447 | orchestrator | 2025-09-17 16:20:11 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:20:11.495523 | orchestrator | 2025-09-17 16:20:11 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state STARTED 2025-09-17 16:20:11.496108 | orchestrator | 2025-09-17 16:20:11 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED 2025-09-17 16:20:11.496785 | orchestrator | 2025-09-17 16:20:11 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:20:11.496918 | orchestrator | 2025-09-17 16:20:11 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:20:14.521037 | orchestrator | 2025-09-17 
16:20:14 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:20:14.521914 | orchestrator | 2025-09-17 16:20:14 | INFO  | Task aaef5f4a-fa6c-4594-80f0-34ee8a52bfe8 is in state SUCCESS 2025-09-17 16:20:14.523680 | orchestrator | 2025-09-17 16:20:14.523714 | orchestrator | 2025-09-17 16:20:14.523726 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:20:14.523737 | orchestrator | 2025-09-17 16:20:14.523752 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:20:14.523811 | orchestrator | Wednesday 17 September 2025 16:19:07 +0000 (0:00:00.229) 0:00:00.229 *** 2025-09-17 16:20:14.523824 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:20:14.523834 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:20:14.523844 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:20:14.523853 | orchestrator | ok: [testbed-manager] 2025-09-17 16:20:14.523862 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:20:14.523871 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:20:14.523881 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:20:14.523890 | orchestrator | 2025-09-17 16:20:14.523899 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:20:14.523909 | orchestrator | Wednesday 17 September 2025 16:19:07 +0000 (0:00:00.719) 0:00:00.948 *** 2025-09-17 16:20:14.523918 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-17 16:20:14.523928 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-17 16:20:14.523938 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-17 16:20:14.523947 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-17 16:20:14.523956 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-17 16:20:14.523966 | orchestrator | ok: 
[testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-17 16:20:14.523975 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-17 16:20:14.523984 | orchestrator | 2025-09-17 16:20:14.523994 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-17 16:20:14.524003 | orchestrator | 2025-09-17 16:20:14.524020 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-17 16:20:14.524102 | orchestrator | Wednesday 17 September 2025 16:19:08 +0000 (0:00:00.625) 0:00:01.573 *** 2025-09-17 16:20:14.524115 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:20:14.524126 | orchestrator | 2025-09-17 16:20:14.524135 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-17 16:20:14.524145 | orchestrator | Wednesday 17 September 2025 16:19:09 +0000 (0:00:01.339) 0:00:02.913 *** 2025-09-17 16:20:14.524154 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-17 16:20:14.524164 | orchestrator | 2025-09-17 16:20:14.524173 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-17 16:20:14.524183 | orchestrator | Wednesday 17 September 2025 16:19:13 +0000 (0:00:03.220) 0:00:06.133 *** 2025-09-17 16:20:14.524193 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-17 16:20:14.524203 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-17 16:20:14.524213 | orchestrator | 2025-09-17 16:20:14.524223 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-17 
16:20:14.524233 | orchestrator | Wednesday 17 September 2025 16:19:19 +0000 (0:00:06.817) 0:00:12.951 *** 2025-09-17 16:20:14.524243 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-17 16:20:14.524252 | orchestrator | 2025-09-17 16:20:14.524273 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-17 16:20:14.524283 | orchestrator | Wednesday 17 September 2025 16:19:23 +0000 (0:00:03.369) 0:00:16.320 *** 2025-09-17 16:20:14.524292 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-17 16:20:14.524302 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-17 16:20:14.524311 | orchestrator | 2025-09-17 16:20:14.524321 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-17 16:20:14.524330 | orchestrator | Wednesday 17 September 2025 16:19:27 +0000 (0:00:03.858) 0:00:20.179 *** 2025-09-17 16:20:14.524340 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-17 16:20:14.524390 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-17 16:20:14.524400 | orchestrator | 2025-09-17 16:20:14.524409 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-17 16:20:14.524419 | orchestrator | Wednesday 17 September 2025 16:19:33 +0000 (0:00:06.373) 0:00:26.554 *** 2025-09-17 16:20:14.524428 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-17 16:20:14.524438 | orchestrator | 2025-09-17 16:20:14.524447 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:20:14.524457 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:20:14.524467 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:20:14.524477 | 
orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:20:14.524486 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:20:14.524496 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:20:14.524518 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:20:14.524529 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:20:14.524538 | orchestrator | 2025-09-17 16:20:14.524547 | orchestrator | 2025-09-17 16:20:14.524557 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:20:14.524567 | orchestrator | Wednesday 17 September 2025 16:19:38 +0000 (0:00:05.305) 0:00:31.859 *** 2025-09-17 16:20:14.524576 | orchestrator | =============================================================================== 2025-09-17 16:20:14.524586 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.82s 2025-09-17 16:20:14.524595 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.37s 2025-09-17 16:20:14.524606 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.31s 2025-09-17 16:20:14.524615 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.86s 2025-09-17 16:20:14.524625 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.37s 2025-09-17 16:20:14.524634 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.22s 2025-09-17 16:20:14.524644 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.34s 2025-09-17 16:20:14.524653 | orchestrator | Group hosts 
based on Kolla action --------------------------------------- 0.72s 2025-09-17 16:20:14.524662 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-09-17 16:20:14.524672 | orchestrator | 2025-09-17 16:20:14.524681 | orchestrator | 2025-09-17 16:20:14.524691 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:20:14.524700 | orchestrator | 2025-09-17 16:20:14.524710 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:20:14.524719 | orchestrator | Wednesday 17 September 2025 16:18:21 +0000 (0:00:00.246) 0:00:00.246 *** 2025-09-17 16:20:14.524730 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:20:14.524747 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:20:14.524764 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:20:14.524894 | orchestrator | 2025-09-17 16:20:14.524918 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:20:14.524934 | orchestrator | Wednesday 17 September 2025 16:18:22 +0000 (0:00:00.414) 0:00:00.661 *** 2025-09-17 16:20:14.524946 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-17 16:20:14.524966 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-17 16:20:14.524978 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-17 16:20:14.524989 | orchestrator | 2025-09-17 16:20:14.525000 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-17 16:20:14.525010 | orchestrator | 2025-09-17 16:20:14.525021 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-17 16:20:14.525032 | orchestrator | Wednesday 17 September 2025 16:18:22 +0000 (0:00:00.635) 0:00:01.296 *** 2025-09-17 16:20:14.525043 | orchestrator | included: 
/ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:20:14.525054 | orchestrator | 2025-09-17 16:20:14.525065 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-17 16:20:14.525076 | orchestrator | Wednesday 17 September 2025 16:18:23 +0000 (0:00:00.605) 0:00:01.901 *** 2025-09-17 16:20:14.525092 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-17 16:20:14.525102 | orchestrator | 2025-09-17 16:20:14.525111 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-17 16:20:14.525121 | orchestrator | Wednesday 17 September 2025 16:18:26 +0000 (0:00:03.509) 0:00:05.410 *** 2025-09-17 16:20:14.525130 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-17 16:20:14.525139 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-17 16:20:14.525149 | orchestrator | 2025-09-17 16:20:14.525158 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-17 16:20:14.525168 | orchestrator | Wednesday 17 September 2025 16:18:32 +0000 (0:00:05.978) 0:00:11.389 *** 2025-09-17 16:20:14.525177 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-17 16:20:14.525186 | orchestrator | 2025-09-17 16:20:14.525196 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-17 16:20:14.525205 | orchestrator | Wednesday 17 September 2025 16:18:36 +0000 (0:00:03.418) 0:00:14.808 *** 2025-09-17 16:20:14.525215 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-17 16:20:14.525224 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-17 16:20:14.525233 | orchestrator | 2025-09-17 16:20:14.525243 | orchestrator | TASK 
[service-ks-register : magnum | Creating roles] *************************** 2025-09-17 16:20:14.525252 | orchestrator | Wednesday 17 September 2025 16:18:40 +0000 (0:00:04.159) 0:00:18.967 *** 2025-09-17 16:20:14.525261 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-17 16:20:14.525270 | orchestrator | 2025-09-17 16:20:14.525280 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-17 16:20:14.525290 | orchestrator | Wednesday 17 September 2025 16:18:44 +0000 (0:00:03.776) 0:00:22.745 *** 2025-09-17 16:20:14.525299 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-17 16:20:14.525308 | orchestrator | 2025-09-17 16:20:14.525318 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-17 16:20:14.525327 | orchestrator | Wednesday 17 September 2025 16:18:48 +0000 (0:00:04.051) 0:00:26.797 *** 2025-09-17 16:20:14.525336 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:20:14.525370 | orchestrator | 2025-09-17 16:20:14.525380 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-17 16:20:14.525399 | orchestrator | Wednesday 17 September 2025 16:18:51 +0000 (0:00:03.501) 0:00:30.298 *** 2025-09-17 16:20:14.525409 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:20:14.525418 | orchestrator | 2025-09-17 16:20:14.525428 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-17 16:20:14.525437 | orchestrator | Wednesday 17 September 2025 16:18:56 +0000 (0:00:04.391) 0:00:34.690 *** 2025-09-17 16:20:14.525446 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:20:14.525456 | orchestrator | 2025-09-17 16:20:14.525465 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-17 16:20:14.525481 | orchestrator | Wednesday 17 September 2025 16:18:59 +0000 
(0:00:03.930) 0:00:38.620 *** 2025-09-17 16:20:14.525495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 16:20:14.525509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 16:20:14.525523 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 16:20:14.525534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:20:14.525552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:20:14.525568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:20:14.525578 | orchestrator | 2025-09-17 16:20:14.525588 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-17 16:20:14.525597 | orchestrator | Wednesday 17 September 2025 16:19:01 +0000 (0:00:01.676) 0:00:40.297 *** 2025-09-17 16:20:14.525607 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:14.525616 | orchestrator | 2025-09-17 16:20:14.525626 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-17 16:20:14.525636 | orchestrator | Wednesday 17 September 2025 16:19:01 +0000 (0:00:00.201) 0:00:40.499 *** 2025-09-17 16:20:14.525645 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:14.525654 | 
orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:14.525664 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:14.525674 | orchestrator | 2025-09-17 16:20:14.525683 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-17 16:20:14.525693 | orchestrator | Wednesday 17 September 2025 16:19:02 +0000 (0:00:00.785) 0:00:41.285 *** 2025-09-17 16:20:14.525702 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 16:20:14.525712 | orchestrator | 2025-09-17 16:20:14.525721 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-17 16:20:14.525731 | orchestrator | Wednesday 17 September 2025 16:19:03 +0000 (0:00:00.940) 0:00:42.225 *** 2025-09-17 16:20:14.525745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-17 16:20:14.525756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.525778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.525789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.525799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.525813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.525823 | orchestrator |
2025-09-17 16:20:14.525832 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-09-17 16:20:14.525842 | orchestrator | Wednesday 17 September 2025 16:19:06 +0000 (0:00:02.520) 0:00:44.745 ***
2025-09-17 16:20:14.525852 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:20:14.525861 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:20:14.525871 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:20:14.525880 | orchestrator |
2025-09-17 16:20:14.525896 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-17 16:20:14.525905 | orchestrator | Wednesday 17 September 2025 16:19:06 +0000 (0:00:00.297) 0:00:45.042 ***
2025-09-17 16:20:14.525915 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:20:14.525924 | orchestrator |
2025-09-17 16:20:14.525934 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-09-17 16:20:14.525944 | orchestrator | Wednesday 17 September 2025 16:19:06 +0000 (0:00:00.572) 0:00:45.615 ***
2025-09-17 16:20:14.525960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.525971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.525986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.525996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526117 | orchestrator |
2025-09-17 16:20:14.526126 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2025-09-17 16:20:14.526136 | orchestrator | Wednesday 17 September 2025 16:19:09 +0000 (0:00:02.486) 0:00:48.101 ***
2025-09-17 16:20:14.526146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526176 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:14.526185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526218 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:14.526228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526248 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:14.526257 | orchestrator |
2025-09-17 16:20:14.526267 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2025-09-17 16:20:14.526277 | orchestrator | Wednesday 17 September 2025 16:19:10 +0000 (0:00:00.537) 0:00:48.639 ***
2025-09-17 16:20:14.526291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526316 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:14.526334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526400 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:14.526410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526440 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:14.526450 | orchestrator |
2025-09-17 16:20:14.526460 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2025-09-17 16:20:14.526470 | orchestrator | Wednesday 17 September 2025 16:19:10 +0000 (0:00:00.982) 0:00:49.622 ***
2025-09-17 16:20:14.526480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526558 | orchestrator |
2025-09-17 16:20:14.526573 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2025-09-17 16:20:14.526583 | orchestrator | Wednesday 17 September 2025 16:19:13 +0000 (0:00:02.048) 0:00:51.670 ***
2025-09-17 16:20:14.526593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526668 | orchestrator |
2025-09-17 16:20:14.526678 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2025-09-17 16:20:14.526688 | orchestrator | Wednesday 17 September 2025 16:19:19 +0000 (0:00:06.256) 0:00:57.926 ***
2025-09-17 16:20:14.526698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526726 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:14.526736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526763 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:14.526773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526798 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:14.526808 | orchestrator |
2025-09-17 16:20:14.526818 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2025-09-17 16:20:14.526828 | orchestrator | Wednesday 17 September 2025 16:19:20 +0000 (0:00:00.972) 0:00:58.899 ***
2025-09-17 16:20:14.526841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-17 16:20:14.526877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor',
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:20:14.526895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:20:14.526904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:20:14.526912 | orchestrator |
2025-09-17 16:20:14.526920 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-17 16:20:14.526927 | orchestrator | Wednesday 17 September 2025 16:19:22 +0000 (0:00:02.192) 0:01:01.091 ***
2025-09-17 16:20:14.526935 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:14.526943 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:14.526951 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:14.526959 | orchestrator |
2025-09-17 16:20:14.526966 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-09-17 16:20:14.526974 | orchestrator | Wednesday 17 September 2025 16:19:22 +0000 (0:00:00.333) 0:01:01.425 ***
2025-09-17 16:20:14.526982 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:20:14.526990 | orchestrator |
2025-09-17 16:20:14.526998 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-09-17 16:20:14.527005 | orchestrator | Wednesday 17 September 2025 16:19:24 +0000 (0:00:02.057) 0:01:03.482 ***
2025-09-17 16:20:14.527013 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:20:14.527021 | orchestrator |
2025-09-17 16:20:14.527029 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-09-17 16:20:14.527041 | orchestrator | Wednesday 17 September 2025 16:19:26 +0000 (0:00:02.104) 0:01:05.587 ***
2025-09-17 16:20:14.527049 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:20:14.527056 | orchestrator |
2025-09-17 16:20:14.527064 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-17 16:20:14.527072 | orchestrator | Wednesday 17 September 2025 16:19:41 +0000 (0:00:15.047) 0:01:20.635 ***
2025-09-17 16:20:14.527080 | orchestrator |
2025-09-17 16:20:14.527087 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-17 16:20:14.527095 | orchestrator | Wednesday 17 September 2025 16:19:42 +0000 (0:00:00.064) 0:01:20.699 ***
2025-09-17 16:20:14.527103 | orchestrator |
2025-09-17 16:20:14.527111 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-17 16:20:14.527123 | orchestrator | Wednesday 17 September 2025 16:19:42 +0000 (0:00:00.067) 0:01:20.767 ***
2025-09-17 16:20:14.527130 | orchestrator |
2025-09-17 16:20:14.527138 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-09-17 16:20:14.527146 | orchestrator | Wednesday 17 September 2025 16:19:42 +0000 (0:00:00.059) 0:01:20.827 ***
2025-09-17 16:20:14.527153 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:20:14.527161 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:20:14.527169 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:20:14.527177 | orchestrator |
2025-09-17 16:20:14.527185 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-09-17 16:20:14.527192 | orchestrator | Wednesday 17 September 2025 16:19:56 +0000 (0:00:14.524) 0:01:35.351 ***
2025-09-17 16:20:14.527200 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:20:14.527208 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:20:14.527216 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:20:14.527223 | orchestrator |
2025-09-17 16:20:14.527231 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:20:14.527239 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-17 16:20:14.527247 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-17 16:20:14.527255 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-17 16:20:14.527263 | orchestrator |
2025-09-17 16:20:14.527271 | orchestrator |
2025-09-17 16:20:14.527279 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:20:14.527286 | orchestrator | Wednesday 17 September 2025 16:20:13 +0000 (0:00:16.607) 0:01:51.959 ***
2025-09-17 16:20:14.527294 | orchestrator | ===============================================================================
2025-09-17 16:20:14.527302 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.61s
2025-09-17 16:20:14.527310 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.05s
2025-09-17 16:20:14.527317 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.52s
2025-09-17 16:20:14.527325 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.26s
2025-09-17 16:20:14.527333 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.98s
2025-09-17 16:20:14.527341 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.39s
2025-09-17 16:20:14.527383 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.16s
2025-09-17 16:20:14.527392 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.05s
2025-09-17 16:20:14.527400 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.93s
2025-09-17 16:20:14.527407 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.78s
2025-09-17 16:20:14.527415 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.51s
2025-09-17 16:20:14.527423 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.50s
2025-09-17 16:20:14.527430 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.42s
2025-09-17 16:20:14.527438 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.52s
2025-09-17 16:20:14.527446 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.49s
2025-09-17 16:20:14.527453 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.19s
2025-09-17 16:20:14.527461 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.10s
2025-09-17 16:20:14.527468 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.06s
2025-09-17 16:20:14.527481 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.05s
2025-09-17 16:20:14.527489 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.68s
2025-09-17 16:20:14.527496 | orchestrator | 2025-09-17 16:20:14 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:20:14.527503 | orchestrator | 2025-09-17 16:20:14 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:20:14.527510 | orchestrator | 2025-09-17 16:20:14 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:20:17.555707 | orchestrator | 2025-09-17 16:20:17 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:20:17.556859 | orchestrator | 2025-09-17 16:20:17 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED
2025-09-17 16:20:17.558527 | orchestrator | 2025-09-17 16:20:17 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:20:17.560978 |
orchestrator | 2025-09-17 16:20:17 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:20:17.560999 | orchestrator | 2025-09-17 16:20:17 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:20:20.587572 | orchestrator | 2025-09-17 16:20:20 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:20:20.588886 | orchestrator | 2025-09-17 16:20:20 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED
2025-09-17 16:20:20.590328 | orchestrator | 2025-09-17 16:20:20 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:20:20.591421 | orchestrator | 2025-09-17 16:20:20 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:20:20.591615 | orchestrator | 2025-09-17 16:20:20 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:20:23.627868 | orchestrator | 2025-09-17 16:20:23 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:20:23.628590 | orchestrator | 2025-09-17 16:20:23 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED
2025-09-17 16:20:23.628907 | orchestrator | 2025-09-17 16:20:23 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:20:23.629889 | orchestrator | 2025-09-17 16:20:23 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:20:23.629912 | orchestrator | 2025-09-17 16:20:23 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:20:26.672669 | orchestrator | 2025-09-17 16:20:26 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:20:26.673021 | orchestrator | 2025-09-17 16:20:26 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED
2025-09-17 16:20:26.675272 | orchestrator | 2025-09-17 16:20:26 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:20:26.677013 | orchestrator | 2025-09-17 16:20:26 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:20:26.677036 | orchestrator | 2025-09-17 16:20:26 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:20:29.701538 | orchestrator | 2025-09-17 16:20:29 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:20:29.701830 | orchestrator | 2025-09-17 16:20:29 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED
2025-09-17 16:20:29.702621 | orchestrator | 2025-09-17 16:20:29 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:20:29.703276 | orchestrator | 2025-09-17 16:20:29 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:20:29.703362 | orchestrator | 2025-09-17 16:20:29 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:20:32.737330 | orchestrator | 2025-09-17 16:20:32 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:20:32.738437 | orchestrator | 2025-09-17 16:20:32 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED
2025-09-17 16:20:32.739809 | orchestrator | 2025-09-17 16:20:32 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:20:32.741189 | orchestrator | 2025-09-17 16:20:32 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:20:32.741394 | orchestrator | 2025-09-17 16:20:32 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:20:35.793793 | orchestrator | 2025-09-17 16:20:35 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:20:35.797064 | orchestrator | 2025-09-17 16:20:35 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED
2025-09-17 16:20:35.799203 | orchestrator | 2025-09-17 16:20:35 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:20:35.801475 | orchestrator | 2025-09-17 16:20:35 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:20:35.801725 | orchestrator | 2025-09-17 16:20:35 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:20:38.837808 | orchestrator | 2025-09-17 16:20:38 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:20:38.839074 | orchestrator | 2025-09-17 16:20:38 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED
2025-09-17 16:20:38.839103 | orchestrator | 2025-09-17 16:20:38 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:20:38.840980 | orchestrator | 2025-09-17 16:20:38 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:20:38.841001 | orchestrator | 2025-09-17 16:20:38 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:20:41.869800 | orchestrator | 2025-09-17 16:20:41 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:20:41.870195 | orchestrator | 2025-09-17 16:20:41 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED
2025-09-17 16:20:41.870721 | orchestrator | 2025-09-17 16:20:41 | INFO  | Task 8509fba7-c306-4964-a9b0-e5447e965a17 is in state STARTED
2025-09-17 16:20:41.871589 | orchestrator | 2025-09-17 16:20:41 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED
2025-09-17 16:20:41.871613 | orchestrator | 2025-09-17 16:20:41 | INFO  | Wait 1 second(s) until the next check
2025-09-17 16:20:44.910551 | orchestrator | 2025-09-17 16:20:44 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED
2025-09-17 16:20:44.912635 | orchestrator | 2025-09-17 16:20:44 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED
2025-09-17 16:20:44.914089 | orchestrator | 2025-09-17 16:20:44 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED
2025-09-17 16:20:44.917205 | orchestrator | 2025-09-17 16:20:44 | INFO  | Task
8509fba7-c306-4964-a9b0-e5447e965a17 is in state SUCCESS
2025-09-17 16:20:44.918725 | orchestrator |
2025-09-17 16:20:44.918758 | orchestrator |
2025-09-17 16:20:44.918769 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 16:20:44.918781 | orchestrator |
2025-09-17 16:20:44.918793 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 16:20:44.918803 | orchestrator | Wednesday 17 September 2025 16:16:14 +0000 (0:00:00.340) 0:00:00.340 ***
2025-09-17 16:20:44.918837 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:20:44.918849 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:20:44.918860 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:20:44.918870 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:20:44.918881 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:20:44.918892 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:20:44.918902 | orchestrator |
2025-09-17 16:20:44.918913 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 16:20:44.918924 | orchestrator | Wednesday 17 September 2025 16:16:15 +0000 (0:00:00.863) 0:00:01.204 ***
2025-09-17 16:20:44.918935 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-09-17 16:20:44.918946 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-09-17 16:20:44.918957 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-09-17 16:20:44.918967 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-09-17 16:20:44.918978 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-09-17 16:20:44.918989 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-09-17 16:20:44.918999 | orchestrator |
2025-09-17 16:20:44.919010 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-09-17 16:20:44.919021 | orchestrator |
2025-09-17 16:20:44.919043 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-17 16:20:44.919055 | orchestrator | Wednesday 17 September 2025 16:16:16 +0000 (0:00:00.550) 0:00:01.754 ***
2025-09-17 16:20:44.919067 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:20:44.919078 | orchestrator |
2025-09-17 16:20:44.919089 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-09-17 16:20:44.919100 | orchestrator | Wednesday 17 September 2025 16:16:17 +0000 (0:00:00.882) 0:00:02.637 ***
2025-09-17 16:20:44.919110 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:20:44.919121 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:20:44.919132 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:20:44.919142 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:20:44.919153 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:20:44.919164 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:20:44.919175 | orchestrator |
2025-09-17 16:20:44.919186 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-09-17 16:20:44.919938 | orchestrator | Wednesday 17 September 2025 16:16:18 +0000 (0:00:01.075) 0:00:03.713 ***
2025-09-17 16:20:44.919957 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:20:44.919968 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:20:44.919978 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:20:44.919989 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:20:44.919999 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:20:44.920009 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:20:44.920020 | orchestrator |
2025-09-17 16:20:44.920031 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-09-17 16:20:44.920042 | orchestrator | Wednesday 17 September 2025 16:16:19 +0000 (0:00:00.998) 0:00:04.711 ***
2025-09-17 16:20:44.920052 | orchestrator | ok: [testbed-node-0] => {
2025-09-17 16:20:44.920063 | orchestrator |  "changed": false,
2025-09-17 16:20:44.920074 | orchestrator |  "msg": "All assertions passed"
2025-09-17 16:20:44.920084 | orchestrator | }
2025-09-17 16:20:44.920095 | orchestrator | ok: [testbed-node-1] => {
2025-09-17 16:20:44.920105 | orchestrator |  "changed": false,
2025-09-17 16:20:44.920116 | orchestrator |  "msg": "All assertions passed"
2025-09-17 16:20:44.920126 | orchestrator | }
2025-09-17 16:20:44.920136 | orchestrator | ok: [testbed-node-2] => {
2025-09-17 16:20:44.920147 | orchestrator |  "changed": false,
2025-09-17 16:20:44.920157 | orchestrator |  "msg": "All assertions passed"
2025-09-17 16:20:44.920168 | orchestrator | }
2025-09-17 16:20:44.920178 | orchestrator | ok: [testbed-node-3] => {
2025-09-17 16:20:44.920188 | orchestrator |  "changed": false,
2025-09-17 16:20:44.920213 | orchestrator |  "msg": "All assertions passed"
2025-09-17 16:20:44.920224 | orchestrator | }
2025-09-17 16:20:44.920235 | orchestrator | ok: [testbed-node-4] => {
2025-09-17 16:20:44.920245 | orchestrator |  "changed": false,
2025-09-17 16:20:44.920255 | orchestrator |  "msg": "All assertions passed"
2025-09-17 16:20:44.920266 | orchestrator | }
2025-09-17 16:20:44.920276 | orchestrator | ok: [testbed-node-5] => {
2025-09-17 16:20:44.920286 | orchestrator |  "changed": false,
2025-09-17 16:20:44.920297 | orchestrator |  "msg": "All assertions passed"
2025-09-17 16:20:44.920308 | orchestrator | }
2025-09-17 16:20:44.920318 | orchestrator |
2025-09-17 16:20:44.920352 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-09-17 16:20:44.920364 | orchestrator | Wednesday 17 September 2025 16:16:19 +0000 (0:00:00.602) 0:00:05.314 ***
2025-09-17 16:20:44.920375 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.920385 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.920395 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.920406 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.920416 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.920426 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.920437 | orchestrator |
2025-09-17 16:20:44.920447 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-09-17 16:20:44.920458 | orchestrator | Wednesday 17 September 2025 16:16:20 +0000 (0:00:00.514) 0:00:05.829 ***
2025-09-17 16:20:44.920469 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-09-17 16:20:44.920480 | orchestrator |
2025-09-17 16:20:44.920490 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-09-17 16:20:44.920501 | orchestrator | Wednesday 17 September 2025 16:16:23 +0000 (0:00:03.423) 0:00:09.252 ***
2025-09-17 16:20:44.920512 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-09-17 16:20:44.920523 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-09-17 16:20:44.920534 | orchestrator |
2025-09-17 16:20:44.920586 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-09-17 16:20:44.920600 | orchestrator | Wednesday 17 September 2025 16:16:30 +0000 (0:00:06.667) 0:00:15.920 ***
2025-09-17 16:20:44.920613 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-17 16:20:44.920625 | orchestrator |
2025-09-17 16:20:44.920638 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-09-17 16:20:44.920650 | orchestrator | Wednesday 17 September 2025 16:16:33 +0000 (0:00:03.249) 0:00:19.170 ***
2025-09-17 16:20:44.920662 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-17 16:20:44.920674 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-09-17 16:20:44.920686 | orchestrator |
2025-09-17 16:20:44.920698 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-09-17 16:20:44.920709 | orchestrator | Wednesday 17 September 2025 16:16:37 +0000 (0:00:03.954) 0:00:23.125 ***
2025-09-17 16:20:44.920722 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-17 16:20:44.920734 | orchestrator |
2025-09-17 16:20:44.920746 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-09-17 16:20:44.920758 | orchestrator | Wednesday 17 September 2025 16:16:41 +0000 (0:00:03.512) 0:00:26.637 ***
2025-09-17 16:20:44.920769 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-09-17 16:20:44.920785 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-09-17 16:20:44.920797 | orchestrator |
2025-09-17 16:20:44.920809 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-17 16:20:44.920830 | orchestrator | Wednesday 17 September 2025 16:16:49 +0000 (0:00:08.321) 0:00:34.959 ***
2025-09-17 16:20:44.920842 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.920854 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.920866 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.920885 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.920896 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.920908 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.920920 | orchestrator |
2025-09-17 16:20:44.920933 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-09-17 16:20:44.920944 | orchestrator | Wednesday 17 September 2025 16:16:50 +0000 (0:00:00.699) 0:00:35.659 ***
2025-09-17 16:20:44.920955 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.920965 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.920976 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.920987 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.920997 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.921007 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.921018 | orchestrator |
2025-09-17 16:20:44.921028 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-09-17 16:20:44.921039 | orchestrator | Wednesday 17 September 2025 16:16:52 +0000 (0:00:02.277) 0:00:37.937 ***
2025-09-17 16:20:44.921049 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:20:44.921060 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:20:44.921070 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:20:44.921081 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:20:44.921092 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:20:44.921102 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:20:44.921113 | orchestrator |
2025-09-17 16:20:44.921123 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-17 16:20:44.921134 | orchestrator | Wednesday 17 September 2025 16:16:54 +0000 (0:00:01.963) 0:00:39.901 ***
2025-09-17 16:20:44.921144 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.921155 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.921165 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.921176 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.921186 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.921196 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.921207 | orchestrator |
2025-09-17 16:20:44.921217 | orchestrator | TASK [neutron : Ensuring config directories exist]
***************************** 2025-09-17 16:20:44.921228 | orchestrator | Wednesday 17 September 2025 16:16:57 +0000 (0:00:03.115) 0:00:43.017 *** 2025-09-17 16:20:44.921242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 16:20:44.921291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 16:20:44.921316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 16:20:44.921391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 16:20:44.921406 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 16:20:44.921418 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 16:20:44.921429 | orchestrator | 2025-09-17 16:20:44.921440 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-17 16:20:44.921451 | orchestrator | Wednesday 17 September 2025 16:17:00 +0000 (0:00:02.879) 0:00:45.896 *** 2025-09-17 16:20:44.921462 | orchestrator | [WARNING]: Skipped 2025-09-17 16:20:44.921473 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-17 16:20:44.921483 | orchestrator | due to this access issue: 2025-09-17 16:20:44.921494 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-17 16:20:44.921511 | orchestrator | a directory 2025-09-17 16:20:44.921522 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 16:20:44.921533 | orchestrator | 2025-09-17 16:20:44.921579 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-17 16:20:44.921592 | orchestrator | Wednesday 17 September 2025 16:17:01 +0000 (0:00:00.656) 0:00:46.552 *** 2025-09-17 16:20:44.921603 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:20:44.921615 | orchestrator | 2025-09-17 16:20:44.921625 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-17 16:20:44.921636 | orchestrator | Wednesday 17 September 2025 16:17:02 +0000 (0:00:01.113) 0:00:47.665 *** 2025-09-17 16:20:44.921652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 16:20:44.921665 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 16:20:44.921676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 16:20:44.921688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 16:20:44.921736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 16:20:44.921755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 16:20:44.921766 | orchestrator | 2025-09-17 16:20:44.921777 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-17 16:20:44.921788 | orchestrator | Wednesday 17 September 2025 16:17:06 +0000 (0:00:04.098) 0:00:51.764 *** 2025-09-17 16:20:44.921799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:20:44.921811 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.921822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:20:44.921841 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.921882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:20:44.921896 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.921907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.921917 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.921930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.921940 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.921950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.921960 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.921969 | orchestrator | 2025-09-17 16:20:44.921979 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-17 16:20:44.921988 | orchestrator | Wednesday 17 September 2025 16:17:08 +0000 (0:00:02.587) 0:00:54.351 *** 2025-09-17 16:20:44.921998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:20:44.922052 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.922100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:20:44.922112 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.922131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:20:44.922141 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.922151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.922161 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.922170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.922186 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.922196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.922206 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.922215 | orchestrator | 2025-09-17 16:20:44.922225 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-17 16:20:44.922260 | orchestrator | Wednesday 17 September 2025 16:17:11 +0000 (0:00:02.906) 0:00:57.258 *** 2025-09-17 16:20:44.922271 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.922280 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.922290 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.922299 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.922308 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.922318 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.922344 | orchestrator | 2025-09-17 16:20:44.922354 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-17 16:20:44.922364 | orchestrator | Wednesday 17 September 2025 16:17:13 +0000 (0:00:02.155) 0:00:59.413 *** 2025-09-17 16:20:44.922373 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.922383 | orchestrator | 2025-09-17 16:20:44.922392 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-17 16:20:44.922402 | orchestrator | Wednesday 17 September 2025 16:17:14 +0000 (0:00:00.097) 0:00:59.511 *** 2025-09-17 16:20:44.922411 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.922420 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.922430 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.922439 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.922448 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.922457 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.922467 | orchestrator | 2025-09-17 
16:20:44.922476 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-17 16:20:44.922485 | orchestrator | Wednesday 17 September 2025 16:17:14 +0000 (0:00:00.555) 0:01:00.067 *** 2025-09-17 16:20:44.922500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.922510 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.922520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:20:44.922536 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.922546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:20:44.922556 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.922572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:20:44.922582 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.922595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.922605 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.922615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.922630 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.922640 | orchestrator | 2025-09-17 16:20:44.922649 | orchestrator | TASK [neutron : 
Copying over config.json files for services] *******************
2025-09-17 16:20:44.922659 | orchestrator | Wednesday 17 September 2025 16:17:17 +0000 (0:00:03.109) 0:01:03.176 ***
2025-09-17 16:20:44.922668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.922679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.922697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.922712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.922727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.922737 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.922747 | orchestrator |
2025-09-17 16:20:44.922757 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-09-17 16:20:44.922767 | orchestrator | Wednesday 17 September 2025 16:17:21 +0000 (0:00:07.168 elided; per-task timing follows) 0:01:07.150 ***
2025-09-17 16:20:44.922781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.922792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.922806 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.922821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.922831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.922846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.922856 | orchestrator |
2025-09-17 16:20:44.922866 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-09-17 16:20:44.922876 | orchestrator | Wednesday 17 September 2025 16:17:28 +0000 (0:00:07.168) 0:01:14.318 ***
2025-09-17 16:20:44.922889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.922904 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.922915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.922924 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.922934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.922944 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.922954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.922970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.922984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.922999 | orchestrator |
2025-09-17 16:20:44.923009 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-09-17 16:20:44.923019 | orchestrator | Wednesday 17 September 2025 16:17:31 +0000 (0:00:03.085) 0:01:17.404 ***
2025-09-17 16:20:44.923029 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.923038 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.923047 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.923057 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:20:44.923066 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:20:44.923075 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:20:44.923085 | orchestrator |
2025-09-17 16:20:44.923094 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-09-17 16:20:44.923104 | orchestrator | Wednesday 17 September 2025 16:17:35 +0000 (0:00:03.118) 0:01:20.522 ***
2025-09-17 16:20:44.923113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.923123 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.923133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.923143 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.923158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.923173 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.923190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.923200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.923210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.923220 | orchestrator |
2025-09-17 16:20:44.923230 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-09-17 16:20:44.923239 | orchestrator | Wednesday 17 September 2025 16:17:39 +0000 (0:00:04.011) 0:01:24.533 ***
2025-09-17 16:20:44.923249 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.923258 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.923268 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.923277 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.923286 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.923295 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.923304 | orchestrator |
2025-09-17 16:20:44.923314 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-09-17 16:20:44.923323 | orchestrator | Wednesday 17 September 2025 16:17:41 +0000 (0:00:02.905) 0:01:27.439 ***
2025-09-17 16:20:44.923346 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.923356 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.923365 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.923375 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.923384 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.923399 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.923408 | orchestrator |
2025-09-17 16:20:44.923418 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-09-17 16:20:44.923427 | orchestrator | Wednesday 17 September 2025 16:17:43 +0000 (0:00:01.950) 0:01:29.390 ***
2025-09-17 16:20:44.923437 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.923446 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.923455 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.923470 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.923480 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.923489 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.923498 | orchestrator |
2025-09-17 16:20:44.923508 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-09-17 16:20:44.923517 | orchestrator | Wednesday 17 September 2025 16:17:45 +0000 (0:00:01.978) 0:01:31.369 ***
2025-09-17 16:20:44.923527 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.923536 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.923545 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.923555 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.923564 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.923573 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.923582 | orchestrator |
2025-09-17 16:20:44.923592 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-09-17 16:20:44.923601 | orchestrator | Wednesday 17 September 2025 16:17:48 +0000 (0:00:02.891) 0:01:34.261 ***
2025-09-17 16:20:44.923611 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.923620 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.923629 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.923638 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.923648 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.923657 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.923666 | orchestrator |
2025-09-17 16:20:44.923676 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-09-17 16:20:44.923685 | orchestrator | Wednesday 17 September 2025 16:17:50 +0000 (0:00:01.854) 0:01:36.115 ***
2025-09-17 16:20:44.923695 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.923708 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.923717 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.923727 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.923736 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.923745 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.923754 | orchestrator |
2025-09-17 16:20:44.923764 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-09-17 16:20:44.923773 | orchestrator | Wednesday 17 September 2025 16:17:52 +0000 (0:00:02.271) 0:01:38.387 ***
2025-09-17 16:20:44.923783 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-17 16:20:44.923792 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.923802 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-17 16:20:44.923811 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.923820 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-17 16:20:44.923830 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.923839 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-17 16:20:44.923849 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.923858 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-17 16:20:44.923868 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.923877 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-17 16:20:44.923887 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.923901 | orchestrator |
2025-09-17 16:20:44.923910 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-09-17 16:20:44.923920 | orchestrator | Wednesday 17 September 2025 16:17:54 +0000 (0:00:01.887) 0:01:40.275 ***
2025-09-17 16:20:44.923930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.923940 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.923955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.923965 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.923975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.923985 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.923998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.924008 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.924018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.924034 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:20:44.924044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.924054 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.924063 | orchestrator |
2025-09-17 16:20:44.924073 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-09-17 16:20:44.924082 | orchestrator | Wednesday 17 September 2025 16:17:56 +0000 (0:00:02.037) 0:01:42.312 ***
2025-09-17 16:20:44.924097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.924107 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:20:44.924121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.924131 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:20:44.924141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-17 16:20:44.924156 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:20:44.924165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.924175 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:20:44.924185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-17 16:20:44.924195 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:20:44.924210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.924221 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.924230 | orchestrator | 2025-09-17 16:20:44.924239 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-17 16:20:44.924249 | orchestrator | Wednesday 17 September 2025 16:17:58 +0000 (0:00:01.896) 0:01:44.209 *** 2025-09-17 16:20:44.924258 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.924268 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.924277 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.924286 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.924300 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.924315 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.924324 | orchestrator | 2025-09-17 16:20:44.924347 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-17 16:20:44.924356 | orchestrator | Wednesday 17 September 2025 16:18:00 +0000 (0:00:01.819) 0:01:46.028 *** 2025-09-17 16:20:44.924366 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.924375 | orchestrator | skipping: 
[testbed-node-1] 2025-09-17 16:20:44.924385 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.924394 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:20:44.924403 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:20:44.924413 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:20:44.924422 | orchestrator | 2025-09-17 16:20:44.924431 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-09-17 16:20:44.924441 | orchestrator | Wednesday 17 September 2025 16:18:03 +0000 (0:00:03.325) 0:01:49.354 *** 2025-09-17 16:20:44.924450 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.924460 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.924469 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.924478 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.924488 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.924497 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.924506 | orchestrator | 2025-09-17 16:20:44.924516 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-17 16:20:44.924526 | orchestrator | Wednesday 17 September 2025 16:18:06 +0000 (0:00:02.748) 0:01:52.102 *** 2025-09-17 16:20:44.924535 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.924544 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.924554 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.924563 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.924573 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.924582 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.924591 | orchestrator | 2025-09-17 16:20:44.924601 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-17 16:20:44.924610 | orchestrator | Wednesday 17 September 2025 16:18:08 +0000 (0:00:02.061) 
0:01:54.166 *** 2025-09-17 16:20:44.924619 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.924629 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.924638 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.924647 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.924657 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.924666 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.924675 | orchestrator | 2025-09-17 16:20:44.924684 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-17 16:20:44.924694 | orchestrator | Wednesday 17 September 2025 16:18:11 +0000 (0:00:03.250) 0:01:57.417 *** 2025-09-17 16:20:44.924703 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.924713 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.924722 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.924731 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.924741 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.924750 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.924759 | orchestrator | 2025-09-17 16:20:44.924769 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-17 16:20:44.924778 | orchestrator | Wednesday 17 September 2025 16:18:13 +0000 (0:00:01.867) 0:01:59.285 *** 2025-09-17 16:20:44.924788 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.924797 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.924806 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.924815 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.924825 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.924834 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.924843 | orchestrator | 2025-09-17 16:20:44.924853 | orchestrator | TASK [neutron : Copying over nsx.ini] 
****************************************** 2025-09-17 16:20:44.924867 | orchestrator | Wednesday 17 September 2025 16:18:15 +0000 (0:00:01.843) 0:02:01.128 *** 2025-09-17 16:20:44.924877 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.924886 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.924896 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.924905 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.924914 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.924923 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.924933 | orchestrator | 2025-09-17 16:20:44.924942 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-17 16:20:44.924952 | orchestrator | Wednesday 17 September 2025 16:18:18 +0000 (0:00:02.447) 0:02:03.576 *** 2025-09-17 16:20:44.924961 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.924975 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.924985 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.924994 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.925004 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.925013 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.925022 | orchestrator | 2025-09-17 16:20:44.925032 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-17 16:20:44.925041 | orchestrator | Wednesday 17 September 2025 16:18:21 +0000 (0:00:03.079) 0:02:06.655 *** 2025-09-17 16:20:44.925051 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.925060 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.925069 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.925079 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.925088 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.925097 | orchestrator | skipping: 
[testbed-node-5] 2025-09-17 16:20:44.925106 | orchestrator | 2025-09-17 16:20:44.925116 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-17 16:20:44.925125 | orchestrator | Wednesday 17 September 2025 16:18:23 +0000 (0:00:02.179) 0:02:08.834 *** 2025-09-17 16:20:44.925135 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-17 16:20:44.925144 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.925154 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-17 16:20:44.925163 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.925176 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-17 16:20:44.925186 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.925196 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-17 16:20:44.925205 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.925215 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-17 16:20:44.925224 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.925234 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-17 16:20:44.925243 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.925252 | orchestrator | 2025-09-17 16:20:44.925262 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-17 16:20:44.925271 | orchestrator | Wednesday 17 September 2025 16:18:25 +0000 (0:00:01.809) 0:02:10.643 *** 2025-09-17 16:20:44.925281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:20:44.925296 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.925306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:20:44.925323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-17 16:20:44.925346 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.925356 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.925369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.925379 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.925389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.925404 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.925414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-17 16:20:44.925424 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.925433 | orchestrator | 2025-09-17 16:20:44.925442 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-17 16:20:44.925452 | orchestrator | Wednesday 17 September 2025 16:18:27 +0000 (0:00:02.075) 0:02:12.718 *** 2025-09-17 16:20:44.925462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 16:20:44.925479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 16:20:44.925496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-17 16:20:44.925506 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 16:20:44.925522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 16:20:44.925532 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-17 16:20:44.925542 | orchestrator | 2025-09-17 16:20:44.925552 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-17 16:20:44.925566 | orchestrator | Wednesday 17 September 2025 16:18:29 +0000 (0:00:02.737) 0:02:15.456 *** 2025-09-17 16:20:44.925576 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:20:44.925586 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:20:44.925595 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:20:44.925604 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:20:44.925614 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:20:44.925623 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:20:44.925632 | orchestrator | 2025-09-17 16:20:44.925642 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-17 16:20:44.925651 
| orchestrator | Wednesday 17 September 2025 16:18:30 +0000 (0:00:00.649) 0:02:16.105 *** 2025-09-17 16:20:44.925661 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:20:44.925670 | orchestrator | 2025-09-17 16:20:44.925680 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-17 16:20:44.925689 | orchestrator | Wednesday 17 September 2025 16:18:32 +0000 (0:00:02.203) 0:02:18.309 *** 2025-09-17 16:20:44.925698 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:20:44.925707 | orchestrator | 2025-09-17 16:20:44.925717 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-17 16:20:44.925726 | orchestrator | Wednesday 17 September 2025 16:18:35 +0000 (0:00:02.638) 0:02:20.948 *** 2025-09-17 16:20:44.925736 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:20:44.925745 | orchestrator | 2025-09-17 16:20:44.925755 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-17 16:20:44.925764 | orchestrator | Wednesday 17 September 2025 16:19:16 +0000 (0:00:40.735) 0:03:01.683 *** 2025-09-17 16:20:44.925779 | orchestrator | 2025-09-17 16:20:44.925793 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-17 16:20:44.925803 | orchestrator | Wednesday 17 September 2025 16:19:16 +0000 (0:00:00.121) 0:03:01.804 *** 2025-09-17 16:20:44.925812 | orchestrator | 2025-09-17 16:20:44.925822 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-17 16:20:44.925831 | orchestrator | Wednesday 17 September 2025 16:19:16 +0000 (0:00:00.069) 0:03:01.874 *** 2025-09-17 16:20:44.925840 | orchestrator | 2025-09-17 16:20:44.925850 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-17 16:20:44.925859 | orchestrator | Wednesday 17 September 2025 16:19:16 +0000 (0:00:00.146) 
0:03:02.020 *** 2025-09-17 16:20:44.925868 | orchestrator | 2025-09-17 16:20:44.925878 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-17 16:20:44.925887 | orchestrator | Wednesday 17 September 2025 16:19:16 +0000 (0:00:00.381) 0:03:02.402 *** 2025-09-17 16:20:44.925896 | orchestrator | 2025-09-17 16:20:44.925906 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-17 16:20:44.925915 | orchestrator | Wednesday 17 September 2025 16:19:16 +0000 (0:00:00.069) 0:03:02.471 *** 2025-09-17 16:20:44.925924 | orchestrator | 2025-09-17 16:20:44.925934 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-17 16:20:44.925943 | orchestrator | Wednesday 17 September 2025 16:19:17 +0000 (0:00:00.075) 0:03:02.547 *** 2025-09-17 16:20:44.925953 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:20:44.925962 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:20:44.925971 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:20:44.925981 | orchestrator | 2025-09-17 16:20:44.925990 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-17 16:20:44.926000 | orchestrator | Wednesday 17 September 2025 16:19:47 +0000 (0:00:30.626) 0:03:33.174 *** 2025-09-17 16:20:44.926009 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:20:44.926042 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:20:44.926052 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:20:44.926061 | orchestrator | 2025-09-17 16:20:44.926071 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:20:44.926081 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-17 16:20:44.926090 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  
rescued=0 ignored=0 2025-09-17 16:20:44.926100 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-17 16:20:44.926110 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-09-17 16:20:44.926119 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-09-17 16:20:44.926129 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-09-17 16:20:44.926138 | orchestrator | 2025-09-17 16:20:44.926148 | orchestrator | 2025-09-17 16:20:44.926157 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:20:44.926167 | orchestrator | Wednesday 17 September 2025 16:20:43 +0000 (0:00:55.566) 0:04:28.740 *** 2025-09-17 16:20:44.926176 | orchestrator | =============================================================================== 2025-09-17 16:20:44.926186 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 55.57s 2025-09-17 16:20:44.926195 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.74s 2025-09-17 16:20:44.926209 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.63s 2025-09-17 16:20:44.926219 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.32s 2025-09-17 16:20:44.926234 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.17s 2025-09-17 16:20:44.926244 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.67s 2025-09-17 16:20:44.926253 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.10s 2025-09-17 16:20:44.926263 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 
4.01s 2025-09-17 16:20:44.926272 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.97s 2025-09-17 16:20:44.926282 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.95s 2025-09-17 16:20:44.926291 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.51s 2025-09-17 16:20:44.926300 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.42s 2025-09-17 16:20:44.926310 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.33s 2025-09-17 16:20:44.926319 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.25s 2025-09-17 16:20:44.926348 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.25s 2025-09-17 16:20:44.926358 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.12s 2025-09-17 16:20:44.926368 | orchestrator | Setting sysctl values --------------------------------------------------- 3.12s 2025-09-17 16:20:44.926377 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.11s 2025-09-17 16:20:44.926391 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.09s 2025-09-17 16:20:44.926400 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.08s 2025-09-17 16:20:44.926409 | orchestrator | 2025-09-17 16:20:44 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:20:44.926419 | orchestrator | 2025-09-17 16:20:44 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:20:47.942755 | orchestrator | 2025-09-17 16:20:47 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:20:47.943319 | orchestrator | 2025-09-17 16:20:47 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in 
state STARTED
| 2025-09-17 16:21:06 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:21:06.174228 | orchestrator | 2025-09-17 16:21:06 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:21:06.174256 | orchestrator | 2025-09-17 16:21:06 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:21:09.208234 | orchestrator | 2025-09-17 16:21:09 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:21:09.208302 | orchestrator | 2025-09-17 16:21:09 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 2025-09-17 16:21:09.208352 | orchestrator | 2025-09-17 16:21:09 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:21:09.208363 | orchestrator | 2025-09-17 16:21:09 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:21:09.208372 | orchestrator | 2025-09-17 16:21:09 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:21:12.239855 | orchestrator | 2025-09-17 16:21:12 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:21:12.240244 | orchestrator | 2025-09-17 16:21:12 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 2025-09-17 16:21:12.240889 | orchestrator | 2025-09-17 16:21:12 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:21:12.242183 | orchestrator | 2025-09-17 16:21:12 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:21:12.242207 | orchestrator | 2025-09-17 16:21:12 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:21:15.275986 | orchestrator | 2025-09-17 16:21:15 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:21:15.278213 | orchestrator | 2025-09-17 16:21:15 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 2025-09-17 16:21:15.279529 | orchestrator | 2025-09-17 16:21:15 | INFO  | 
Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:21:15.281000 | orchestrator | 2025-09-17 16:21:15 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:21:15.281181 | orchestrator | 2025-09-17 16:21:15 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:21:18.315386 | orchestrator | 2025-09-17 16:21:18 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state STARTED 2025-09-17 16:21:18.315641 | orchestrator | 2025-09-17 16:21:18 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 2025-09-17 16:21:18.316446 | orchestrator | 2025-09-17 16:21:18 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:21:18.317441 | orchestrator | 2025-09-17 16:21:18 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:21:18.318859 | orchestrator | 2025-09-17 16:21:18 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:21:21.346507 | orchestrator | 2025-09-17 16:21:21 | INFO  | Task de1accfd-ec38-47e6-a6f9-6449ec18e89d is in state SUCCESS 2025-09-17 16:21:21.347509 | orchestrator | 2025-09-17 16:21:21.347541 | orchestrator | 2025-09-17 16:21:21.347554 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:21:21.347566 | orchestrator | 2025-09-17 16:21:21.347577 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:21:21.347652 | orchestrator | Wednesday 17 September 2025 16:18:29 +0000 (0:00:00.272) 0:00:00.272 *** 2025-09-17 16:21:21.347731 | orchestrator | ok: [testbed-manager] 2025-09-17 16:21:21.347747 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:21:21.347758 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:21:21.347768 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:21:21.347780 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:21:21.347791 | orchestrator | ok: [testbed-node-4] 
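The repeated `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` records above come from the OSISM CLI polling its background tasks once per second until each reaches a terminal state such as SUCCESS. A minimal sketch of that polling pattern, assuming a hypothetical `wait_for_tasks` helper and a `get_state` callback (not osism's actual implementation):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1, log=print):
    """Poll task states until none is STARTED any more, emitting
    the same kind of status lines seen in the job console."""
    pending = list(task_ids)
    while pending:
        remaining = []
        for task_id in pending:
            state = get_state(task_id)  # e.g. a task-result lookup in the real tool
            log(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                remaining.append(task_id)
        pending = remaining
        if pending:
            log(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return True
```

In the log above, four task UUIDs are tracked together, and the play output resumes as soon as the first one reports SUCCESS.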
2025-09-17 16:21:21.347802 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:21:21.347813 | orchestrator |
2025-09-17 16:21:21.347824 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 16:21:21.347835 | orchestrator | Wednesday 17 September 2025 16:18:30 +0000 (0:00:00.992) 0:00:01.264 ***
2025-09-17 16:21:21.347846 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-09-17 16:21:21.347857 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-09-17 16:21:21.347868 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-09-17 16:21:21.347879 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-09-17 16:21:21.347890 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-09-17 16:21:21.347983 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-09-17 16:21:21.347995 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-09-17 16:21:21.348006 | orchestrator |
2025-09-17 16:21:21.348017 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-09-17 16:21:21.348028 | orchestrator |
2025-09-17 16:21:21.348039 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-09-17 16:21:21.348072 | orchestrator | Wednesday 17 September 2025 16:18:31 +0000 (0:00:00.624) 0:00:01.889 ***
2025-09-17 16:21:21.348096 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:21:21.348109 | orchestrator |
2025-09-17 16:21:21.348119 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-09-17 16:21:21.348130 | orchestrator | Wednesday 17 September 2025 16:18:32 +0000 (0:00:01.068) 0:00:02.957 ***
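The "Group hosts based on enabled services" task uses Ansible's group_by pattern: each host lands in a dynamic group named after the flag and its value (here `enable_prometheus_True`), so the following plays only target hosts where the service is enabled. A rough Python equivalent of that bucketing, with a made-up `group_hosts` helper for illustration:

```python
def group_hosts(hostvars, flag):
    """Bucket hosts into groups named "<flag>_<value>", mimicking
    Ansible group_by with key "enable_prometheus_{{ enable_prometheus }}"."""
    groups = {}
    for host, hvars in hostvars.items():
        key = f"{flag}_{hvars.get(flag, False)}"
        groups.setdefault(key, []).append(host)
    return groups
```

With `enable_prometheus: true` on all seven testbed hosts, every host ends up in `enable_prometheus_True`, which is exactly what the per-host `ok: ... => (item=enable_prometheus_True)` lines record.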
2025-09-17 16:21:21.348144 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 16:21:21.348159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.348172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.348183 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.348207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.348513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.348550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.348563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.348574 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.348586 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.348597 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.348680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.348696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.350104 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 16:21:21.350150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.350163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.350174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.350186 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.350232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.350246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.350263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.350279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.350292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.350326 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.350340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.350351 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.350390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.350410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.350744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.350762 | orchestrator | 2025-09-17 16:21:21.350774 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-17 16:21:21.350786 | orchestrator | Wednesday 17 September 2025 16:18:34 +0000 (0:00:02.788) 0:00:05.745 *** 2025-09-17 16:21:21.350797 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:21:21.350809 | orchestrator | 2025-09-17 16:21:21.350820 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 
2025-09-17 16:21:21.350830 | orchestrator | Wednesday 17 September 2025 16:18:36 +0000 (0:00:01.458) 0:00:07.203 *** 2025-09-17 16:21:21.350842 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 16:21:21.350855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.350866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.350959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.350985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.350996 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.351017 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.351028 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.351091 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.351104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.351290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.351347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.351371 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-17 16:21:21.351392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.351562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.351573 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.351595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.351607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.351697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.351714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351754 | orchestrator |
2025-09-17 16:21:21.351765 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-09-17 16:21:21.351777 | orchestrator | Wednesday 17 September 2025 16:18:42 +0000 (0:00:05.680) 0:00:12.884 ***
2025-09-17 16:21:21.351788 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-17 16:21:21.351800 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.351817 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.351879 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-17 16:21:21.351895 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.351922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.351980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.351991 | orchestrator | skipping: [testbed-manager]
2025-09-17 16:21:21.352035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.352049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.352087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.352127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.352230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352251 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:21:21.352271 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:21:21.352293 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:21:21.352350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.352366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.352378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.352399 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:21:21.352411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.352424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.352475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.352489 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:21:21.352501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.352519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.352531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.352543 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:21:21.352555 | orchestrator |
2025-09-17 16:21:21.352568 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-09-17 16:21:21.352590 | orchestrator | Wednesday 17 September 2025 16:18:43 +0000 (0:00:01.348) 0:00:14.233 ***
2025-09-17 16:21:21.352604 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-17 16:21:21.352617 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.352630 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.352673 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-17 16:21:21.352693 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352704 | orchestrator | skipping: [testbed-manager]
2025-09-17 16:21:21.352715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.352733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.352795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352808 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:21:21.352819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.352830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.352869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.352880 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:21:21.352891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.352933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:21:21.352978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:21:21.352991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 16:21:21.353002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-17 16:21:21.353014 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:21:21.353036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 16:21:21.353048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 16:21:21.353059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2025-09-17 16:21:21.353070 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:21:21.353081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-17 16:21:21.353121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-17 16:21:21.353134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-17 16:21:21.353145 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:21:21.353156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-17 16:21:21.353178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.353189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-17 16:21:21.353200 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:21:21.353211 | orchestrator |
2025-09-17 16:21:21.353222 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-09-17 16:21:21.353233 | orchestrator | Wednesday 17 September 2025 16:18:45 +0000 (0:00:02.198) 0:00:16.431 ***
2025-09-17 16:21:21.353244 | orchestrator |
changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 16:21:21.353255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.353293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.353362 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.353375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.353399 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.353411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.353422 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.353433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.353445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.353485 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.353497 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.353513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.353527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.353537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.353548 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 16:21:21.353559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.353594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.353612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.353625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-17 16:21:21.353636 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.353645 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.353655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.353665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.353700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.353711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.353730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.353743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.353753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-17 16:21:21.353763 | orchestrator |
2025-09-17 16:21:21.353773 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-09-17 16:21:21.353782 | orchestrator | Wednesday 17 September 2025 16:18:51 +0000 (0:00:05.958) 0:00:22.390 ***
2025-09-17 16:21:21.353792 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-17 16:21:21.353802 | orchestrator |
2025-09-17 16:21:21.353811 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-09-17 16:21:21.353821 | orchestrator | Wednesday 17 September 2025 16:18:52 +0000 (0:00:00.995) 0:00:23.385 ***
2025-09-17 16:21:21.353831 | orchestrator | skipping: [testbed-node-0] => (item={'path':
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1061437, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7279253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.353842 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1061437, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7279253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.353878 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1061437, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7279253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.353896 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1061583, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7444057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.353909 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1061437, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7279253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.353919 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1061437, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7279253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.353929 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1061583, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7444057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.353939 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061419, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7267032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.353949 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1061437, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7279253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.353989 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1061583, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7444057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354001 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1061437, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7279253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354056 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1061458, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7303157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354070 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061419, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7267032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354080 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061419, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7267032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354090 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1061583, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7444057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354100 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1061583, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7444057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354146 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1061583, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7444057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354158 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1061583, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7444057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354168 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1061458, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7303157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354184 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061413, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7247193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354194 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061419, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7267032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354204 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1061458, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7303157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354213 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061419, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7267032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354254 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1061458, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7303157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354266 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061419, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7267032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354276 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061413, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7247193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354290 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061413, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7247193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354300 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061440, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.728434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354334 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061413, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7247193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354353 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1061458, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7303157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354408 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061440, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.728434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354421 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1061458, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7303157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354430 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061419, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7267032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354445 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061440, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.728434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354455 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061440, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.728434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354465 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061456, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7298539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354480 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061456, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7298539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354515 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061445, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7286901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354527 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061456, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7298539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354537 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061413, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7247193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354551 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061413, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7247193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354561 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1061432, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7276144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354571 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061456, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7298539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354586 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061445, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7286901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354619 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061445, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7286901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354631 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061440, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.728434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354641 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061440, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.728434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354658 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1061458, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7303157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354668 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1061432, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7276144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354678 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061445, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7286901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354693 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061576, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7438753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354727 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1061432, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7276144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354739 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061456, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7298539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354749 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061407, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.72405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354769 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061576, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7438753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354779 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1061432, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7276144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354794 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061576, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7438753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354804 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061456, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7298539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354839 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061576, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7438753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354851 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061407, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.72405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354861 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061407, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.72405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354874 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1061604, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7468908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354884 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061445, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7286901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354899 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061445, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7286901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354909 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061407, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.72405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354919 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1061432, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7276144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354954 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1061604, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7468908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354966 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061413, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7247193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-17 16:21:21.354979 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False,
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1061604, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7468908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.354989 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1061432, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7276144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355004 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061576, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7438753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355014 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061465, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 
1752315970.0, 'ctime': 1758123291.743494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355024 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1061604, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7468908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355059 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061465, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.743494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355070 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061576, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7438753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355084 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061407, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.72405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355094 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061465, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.743494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355109 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061465, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.743494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355119 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061417, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.725051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355129 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061417, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.725051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355163 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061407, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.72405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355174 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061417, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.725051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355188 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061417, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.725051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355203 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1061604, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7468908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355213 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061410, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 
1752315970.0, 'ctime': 1758123291.724413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355223 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061410, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.724413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355233 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061440, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.728434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355248 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061410, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.724413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355258 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061450, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7295177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355271 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1061604, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7468908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355286 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061410, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.724413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355296 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061465, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.743494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355354 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061450, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7295177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355366 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061450, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7295177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355383 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1061448, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7290213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355394 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061450, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7295177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355408 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061465, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.743494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355432 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061417, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 
'mtime': 1752315970.0, 'ctime': 1758123291.725051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355442 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1061448, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7290213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355452 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1061448, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7290213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355462 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1061601, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7464685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355471 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:21:21.355487 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061456, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7298539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355497 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061417, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.725051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355516 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1061448, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7290213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-09-17 16:21:21.355527 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061410, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.724413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355536 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1061601, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7464685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355546 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:21:21.355556 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1061601, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7464685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355566 | orchestrator | skipping: [testbed-node-1] 2025-09-17 
16:21:21.355575 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1061601, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7464685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355585 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:21:21.355600 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061410, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.724413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355610 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061450, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7295177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355629 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061450, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7295177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355640 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061445, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7286901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355649 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1061448, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7290213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355659 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1061448, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7290213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355669 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1061601, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7464685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355678 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:21:21.355693 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1061601, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7464685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-17 16:21:21.355708 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:21:21.355718 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 5987, 'inode': 1061432, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7276144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355732 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061576, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7438753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355742 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061407, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.72405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355752 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1061604, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7468908, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355762 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061465, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.743494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355772 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061417, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.725051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355787 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061410, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.724413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355802 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061450, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7295177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355815 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1061448, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7290213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355825 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1061601, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7464685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-17 16:21:21.355833 | orchestrator | 2025-09-17 16:21:21.355841 | orchestrator | TASK [prometheus : Find prometheus common 
config overrides] ******************** 2025-09-17 16:21:21.355849 | orchestrator | Wednesday 17 September 2025 16:19:13 +0000 (0:00:21.321) 0:00:44.707 *** 2025-09-17 16:21:21.355857 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-17 16:21:21.355865 | orchestrator | 2025-09-17 16:21:21.355873 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-17 16:21:21.355880 | orchestrator | Wednesday 17 September 2025 16:19:14 +0000 (0:00:00.697) 0:00:45.405 *** 2025-09-17 16:21:21.355888 | orchestrator | [WARNING]: Skipped 2025-09-17 16:21:21.355896 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.355904 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-09-17 16:21:21.355912 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.355920 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-17 16:21:21.355928 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-17 16:21:21.355936 | orchestrator | [WARNING]: Skipped 2025-09-17 16:21:21.355943 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.355951 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-17 16:21:21.355959 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.355966 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-17 16:21:21.355974 | orchestrator | [WARNING]: Skipped 2025-09-17 16:21:21.355982 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.355989 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-17 16:21:21.355997 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 
16:21:21.356009 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-09-17 16:21:21.356017 | orchestrator | [WARNING]: Skipped 2025-09-17 16:21:21.356024 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.356032 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-17 16:21:21.356040 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.356047 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-17 16:21:21.356055 | orchestrator | [WARNING]: Skipped 2025-09-17 16:21:21.356063 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.356074 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-17 16:21:21.356082 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.356090 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-17 16:21:21.356098 | orchestrator | [WARNING]: Skipped 2025-09-17 16:21:21.356105 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.356113 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-09-17 16:21:21.356121 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.356129 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-17 16:21:21.356136 | orchestrator | [WARNING]: Skipped 2025-09-17 16:21:21.356144 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.356152 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-09-17 16:21:21.356159 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-17 16:21:21.356167 | orchestrator | 
node-4/prometheus.yml.d' is not a directory 2025-09-17 16:21:21.356175 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 16:21:21.356182 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-17 16:21:21.356190 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-17 16:21:21.356198 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-17 16:21:21.356205 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-17 16:21:21.356213 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-17 16:21:21.356221 | orchestrator | 2025-09-17 16:21:21.356229 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-17 16:21:21.356236 | orchestrator | Wednesday 17 September 2025 16:19:16 +0000 (0:00:02.269) 0:00:47.674 *** 2025-09-17 16:21:21.356244 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-17 16:21:21.356255 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:21:21.356263 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-17 16:21:21.356271 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:21:21.356279 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-17 16:21:21.356286 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:21:21.356294 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-17 16:21:21.356315 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:21:21.356324 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-17 16:21:21.356332 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:21:21.356340 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-17 16:21:21.356352 | 
orchestrator | skipping: [testbed-node-5] 2025-09-17 16:21:21.356365 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-17 16:21:21.356373 | orchestrator | 2025-09-17 16:21:21.356381 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-17 16:21:21.356394 | orchestrator | Wednesday 17 September 2025 16:19:30 +0000 (0:00:13.546) 0:01:01.221 *** 2025-09-17 16:21:21.356402 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-17 16:21:21.356410 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-17 16:21:21.356417 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:21:21.356425 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:21:21.356433 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-17 16:21:21.356441 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:21:21.356448 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-17 16:21:21.356456 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:21:21.356464 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-17 16:21:21.356472 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:21:21.356479 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-17 16:21:21.356487 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:21:21.356495 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-17 16:21:21.356503 | orchestrator | 2025-09-17 16:21:21.356510 | orchestrator | TASK [prometheus : Copying over prometheus 
alertmanager config file] *********** 2025-09-17 16:21:21.356518 | orchestrator | Wednesday 17 September 2025 16:19:32 +0000 (0:00:02.444) 0:01:03.666 *** 2025-09-17 16:21:21.356526 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-17 16:21:21.356535 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-17 16:21:21.356543 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:21:21.356551 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-17 16:21:21.356559 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:21:21.356566 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:21:21.356579 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-17 16:21:21.356587 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:21:21.356595 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-17 16:21:21.356603 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-17 16:21:21.356610 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:21:21.356618 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-17 16:21:21.356626 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:21:21.356634 | orchestrator | 2025-09-17 16:21:21.356641 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-17 16:21:21.356649 | 
orchestrator | Wednesday 17 September 2025 16:19:34 +0000 (0:00:01.282) 0:01:04.948 *** 2025-09-17 16:21:21.356657 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-17 16:21:21.356665 | orchestrator | 2025-09-17 16:21:21.356672 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-17 16:21:21.356680 | orchestrator | Wednesday 17 September 2025 16:19:34 +0000 (0:00:00.634) 0:01:05.583 *** 2025-09-17 16:21:21.356688 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:21:21.356696 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:21:21.356703 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:21:21.356716 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:21:21.356723 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:21:21.356731 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:21:21.356739 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:21:21.356746 | orchestrator | 2025-09-17 16:21:21.356754 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-17 16:21:21.356765 | orchestrator | Wednesday 17 September 2025 16:19:35 +0000 (0:00:00.583) 0:01:06.167 *** 2025-09-17 16:21:21.356773 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:21:21.356781 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:21:21.356788 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:21:21.356796 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:21:21.356804 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:21:21.356811 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:21:21.356819 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:21:21.356827 | orchestrator | 2025-09-17 16:21:21.356834 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-17 16:21:21.356842 | orchestrator | Wednesday 17 September 2025 16:19:37 +0000 
(0:00:01.897) 0:01:08.064 *** 2025-09-17 16:21:21.356850 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-17 16:21:21.356858 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-17 16:21:21.356865 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:21:21.356873 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-17 16:21:21.356881 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:21:21.356889 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:21:21.356896 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-17 16:21:21.356904 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:21:21.356912 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-17 16:21:21.356920 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:21:21.356927 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-17 16:21:21.356935 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:21:21.356943 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-17 16:21:21.356951 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:21:21.356958 | orchestrator | 2025-09-17 16:21:21.356966 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-17 16:21:21.356974 | orchestrator | Wednesday 17 September 2025 16:19:38 +0000 (0:00:01.325) 0:01:09.390 *** 2025-09-17 16:21:21.356982 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-17 16:21:21.356990 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:21:21.356997 | 
orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-17 16:21:21.357005 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:21:21.357013 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-17 16:21:21.357021 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:21:21.357028 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-17 16:21:21.357036 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:21:21.357044 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-17 16:21:21.357052 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:21:21.357059 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-17 16:21:21.357072 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-17 16:21:21.357080 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:21:21.357088 | orchestrator | 2025-09-17 16:21:21.357099 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-17 16:21:21.357107 | orchestrator | Wednesday 17 September 2025 16:19:40 +0000 (0:00:01.426) 0:01:10.816 *** 2025-09-17 16:21:21.357115 | orchestrator | [WARNING]: Skipped 2025-09-17 16:21:21.357123 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-17 16:21:21.357131 | orchestrator | due to this access issue: 2025-09-17 16:21:21.357139 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-17 16:21:21.357146 | orchestrator | not a directory 2025-09-17 16:21:21.357154 
| orchestrator | ok: [testbed-manager -> localhost] 2025-09-17 16:21:21.357162 | orchestrator | 2025-09-17 16:21:21.357170 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-17 16:21:21.357177 | orchestrator | Wednesday 17 September 2025 16:19:41 +0000 (0:00:00.968) 0:01:11.784 *** 2025-09-17 16:21:21.357185 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:21:21.357193 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:21:21.357201 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:21:21.357208 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:21:21.357216 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:21:21.357224 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:21:21.357232 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:21:21.357239 | orchestrator | 2025-09-17 16:21:21.357247 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-17 16:21:21.357255 | orchestrator | Wednesday 17 September 2025 16:19:41 +0000 (0:00:00.729) 0:01:12.514 *** 2025-09-17 16:21:21.357263 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:21:21.357271 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:21:21.357278 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:21:21.357286 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:21:21.357294 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:21:21.357314 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:21:21.357323 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:21:21.357331 | orchestrator | 2025-09-17 16:21:21.357344 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-17 16:21:21.357352 | orchestrator | Wednesday 17 September 2025 16:19:42 +0000 (0:00:00.705) 0:01:13.220 *** 2025-09-17 16:21:21.357360 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.357369 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-17 16:21:21.357377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.357390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.357403 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.357411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.357419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.357431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.357439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.357447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.357459 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-17 16:21:21.357468 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.357481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.357490 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-17 16:21:21.357502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.357510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.357525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.357533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.357541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.357554 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-17 16:21:21.357565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.357583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.357597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.357610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.357627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.357636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-17 16:21:21.357648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.357657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.357665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-17 16:21:21.357673 | orchestrator | 2025-09-17 16:21:21.357681 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-17 16:21:21.357692 | orchestrator | Wednesday 17 September 2025 16:19:47 +0000 (0:00:04.969) 0:01:18.190 *** 2025-09-17 16:21:21.357700 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-17 16:21:21.357708 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:21:21.357716 | orchestrator | 2025-09-17 16:21:21.357723 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 16:21:21.357731 | orchestrator | Wednesday 17 September 2025 16:19:48 +0000 (0:00:01.211) 0:01:19.402 *** 2025-09-17 
16:21:21.357739 | orchestrator | 2025-09-17 16:21:21.357747 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 16:21:21.357754 | orchestrator | Wednesday 17 September 2025 16:19:48 +0000 (0:00:00.064) 0:01:19.466 *** 2025-09-17 16:21:21.357766 | orchestrator | 2025-09-17 16:21:21.357774 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 16:21:21.357782 | orchestrator | Wednesday 17 September 2025 16:19:48 +0000 (0:00:00.061) 0:01:19.528 *** 2025-09-17 16:21:21.357790 | orchestrator | 2025-09-17 16:21:21.357797 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 16:21:21.357805 | orchestrator | Wednesday 17 September 2025 16:19:48 +0000 (0:00:00.211) 0:01:19.740 *** 2025-09-17 16:21:21.357813 | orchestrator | 2025-09-17 16:21:21.357821 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 16:21:21.357828 | orchestrator | Wednesday 17 September 2025 16:19:49 +0000 (0:00:00.123) 0:01:19.864 *** 2025-09-17 16:21:21.357836 | orchestrator | 2025-09-17 16:21:21.357844 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 16:21:21.357851 | orchestrator | Wednesday 17 September 2025 16:19:49 +0000 (0:00:00.063) 0:01:19.927 *** 2025-09-17 16:21:21.357859 | orchestrator | 2025-09-17 16:21:21.357867 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-17 16:21:21.357875 | orchestrator | Wednesday 17 September 2025 16:19:49 +0000 (0:00:00.067) 0:01:19.995 *** 2025-09-17 16:21:21.357882 | orchestrator | 2025-09-17 16:21:21.357890 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-17 16:21:21.357898 | orchestrator | Wednesday 17 September 2025 16:19:49 +0000 (0:00:00.083) 0:01:20.079 *** 
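[Editor's aside: the `(item={'key': ..., 'value': {...}})` entries above show the per-service container definitions kolla-ansible loops over. As a rough illustration only — the mapping shape is taken from the log, but the function and variable names below are hypothetical, not OSISM/kolla-ansible code — a host's set of containers can be derived from such a mapping like this:]

```python
# Illustrative sketch of the service mapping seen in the log above.
# Each key is a service; each value says which image to run, on hosts
# in which inventory group, and whether the service is enabled.
prometheus_services = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711",
        "volumes": ["/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro"],
        "dimensions": {},
    },
    "prometheus-alertmanager": {
        "container_name": "prometheus_alertmanager",
        "group": "prometheus-alertmanager",
        "enabled": False,  # disabled services are skipped entirely
        "image": "registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711",
        "volumes": [],
        "dimensions": {},
    },
}


def containers_to_deploy(services, host_groups):
    """Return the container names a host should run, given its group membership.

    Mirrors the filtering visible in the log: an item is acted on only when
    the service is enabled and the host belongs to the service's group.
    """
    return [
        svc["container_name"]
        for svc in services.values()
        if svc["enabled"] and svc["group"] in host_groups
    ]


print(containers_to_deploy(prometheus_services, {"prometheus-node-exporter"}))
# → ['prometheus_node_exporter']
```

This is why, in the recap below, compute-only nodes (testbed-node-3..5) report fewer `ok`/`changed` tasks than the controllers: they are members of fewer prometheus-* groups.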
2025-09-17 16:21:21.357906 | orchestrator | changed: [testbed-manager] 2025-09-17 16:21:21.357913 | orchestrator | 2025-09-17 16:21:21.357921 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-17 16:21:21.357929 | orchestrator | Wednesday 17 September 2025 16:20:05 +0000 (0:00:16.328) 0:01:36.407 *** 2025-09-17 16:21:21.357936 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:21:21.357944 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:21:21.357952 | orchestrator | changed: [testbed-manager] 2025-09-17 16:21:21.357960 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:21:21.357967 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:21:21.357975 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:21:21.357983 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:21:21.357990 | orchestrator | 2025-09-17 16:21:21.357998 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-17 16:21:21.358006 | orchestrator | Wednesday 17 September 2025 16:20:18 +0000 (0:00:12.483) 0:01:48.890 *** 2025-09-17 16:21:21.358014 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:21:21.358049 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:21:21.358056 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:21:21.358064 | orchestrator | 2025-09-17 16:21:21.358072 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-17 16:21:21.358080 | orchestrator | Wednesday 17 September 2025 16:20:27 +0000 (0:00:09.679) 0:01:58.570 *** 2025-09-17 16:21:21.358087 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:21:21.358095 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:21:21.358103 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:21:21.358110 | orchestrator | 2025-09-17 16:21:21.358118 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] 
*********** 2025-09-17 16:21:21.358126 | orchestrator | Wednesday 17 September 2025 16:20:37 +0000 (0:00:09.966) 0:02:08.537 *** 2025-09-17 16:21:21.358133 | orchestrator | changed: [testbed-manager] 2025-09-17 16:21:21.358141 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:21:21.358153 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:21:21.358161 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:21:21.358169 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:21:21.358177 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:21:21.358184 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:21:21.358192 | orchestrator | 2025-09-17 16:21:21.358200 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-17 16:21:21.358212 | orchestrator | Wednesday 17 September 2025 16:20:51 +0000 (0:00:13.957) 0:02:22.494 *** 2025-09-17 16:21:21.358220 | orchestrator | changed: [testbed-manager] 2025-09-17 16:21:21.358227 | orchestrator | 2025-09-17 16:21:21.358235 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-17 16:21:21.358243 | orchestrator | Wednesday 17 September 2025 16:20:57 +0000 (0:00:06.246) 0:02:28.740 *** 2025-09-17 16:21:21.358251 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:21:21.358258 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:21:21.358266 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:21:21.358274 | orchestrator | 2025-09-17 16:21:21.358281 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-17 16:21:21.358289 | orchestrator | Wednesday 17 September 2025 16:21:04 +0000 (0:00:06.671) 0:02:35.411 *** 2025-09-17 16:21:21.358297 | orchestrator | changed: [testbed-manager] 2025-09-17 16:21:21.358413 | orchestrator | 2025-09-17 16:21:21.358435 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter 
container] *** 2025-09-17 16:21:21.358443 | orchestrator | Wednesday 17 September 2025 16:21:09 +0000 (0:00:04.942) 0:02:40.354 *** 2025-09-17 16:21:21.358451 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:21:21.358459 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:21:21.358467 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:21:21.358474 | orchestrator | 2025-09-17 16:21:21.358482 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:21:21.358490 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-17 16:21:21.358504 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-17 16:21:21.358512 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-17 16:21:21.358520 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-17 16:21:21.358528 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-17 16:21:21.358535 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-17 16:21:21.358543 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-17 16:21:21.358551 | orchestrator | 2025-09-17 16:21:21.358558 | orchestrator | 2025-09-17 16:21:21.358566 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:21:21.358574 | orchestrator | Wednesday 17 September 2025 16:21:19 +0000 (0:00:10.242) 0:02:50.597 *** 2025-09-17 16:21:21.358582 | orchestrator | =============================================================================== 2025-09-17 16:21:21.358590 | orchestrator | prometheus : Copying over custom prometheus 
alert rules files ---------- 21.32s 2025-09-17 16:21:21.358597 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.33s 2025-09-17 16:21:21.358605 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.96s 2025-09-17 16:21:21.358613 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.55s 2025-09-17 16:21:21.358620 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.48s 2025-09-17 16:21:21.358628 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.24s 2025-09-17 16:21:21.358636 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.97s 2025-09-17 16:21:21.358643 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.68s 2025-09-17 16:21:21.358658 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.67s 2025-09-17 16:21:21.358666 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.25s 2025-09-17 16:21:21.358673 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.96s 2025-09-17 16:21:21.358681 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.68s 2025-09-17 16:21:21.358688 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.97s 2025-09-17 16:21:21.358696 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.94s 2025-09-17 16:21:21.358704 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.79s 2025-09-17 16:21:21.358711 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.44s 2025-09-17 16:21:21.358719 | orchestrator | prometheus : Find prometheus host config overrides 
---------------------- 2.27s 2025-09-17 16:21:21.358727 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.20s 2025-09-17 16:21:21.358744 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.90s 2025-09-17 16:21:21.358752 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.46s 2025-09-17 16:21:21.358760 | orchestrator | 2025-09-17 16:21:21 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 2025-09-17 16:21:21.358768 | orchestrator | 2025-09-17 16:21:21 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:21:21.358776 | orchestrator | 2025-09-17 16:21:21 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:21:21.358784 | orchestrator | 2025-09-17 16:21:21 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:21:21.358791 | orchestrator | 2025-09-17 16:21:21 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:21:24.373979 | orchestrator | 2025-09-17 16:21:24 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 2025-09-17 16:21:24.377056 | orchestrator | 2025-09-17 16:21:24 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:21:24.377086 | orchestrator | 2025-09-17 16:21:24 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:21:24.377502 | orchestrator | 2025-09-17 16:21:24 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:21:24.377523 | orchestrator | 2025-09-17 16:21:24 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:21:27.415962 | orchestrator | 2025-09-17 16:21:27 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 2025-09-17 16:21:27.418374 | orchestrator | 2025-09-17 16:21:27 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 
Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED [... identical polling cycles (all four tasks in state STARTED, one check every ~3 seconds) from 16:21:27 through 16:22:19 elided ...] 2025-09-17 16:22:22.241857 | orchestrator | 2025-09-17 16:22:22 | INFO  | Task
32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state STARTED 2025-09-17 16:22:22.243693 | orchestrator | 2025-09-17 16:22:22 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:22:22.244018 | orchestrator | 2025-09-17 16:22:22 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:22:25.341493 | orchestrator | 2025-09-17 16:22:25 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 2025-09-17 16:22:25.343174 | orchestrator | 2025-09-17 16:22:25 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:22:25.346547 | orchestrator | 2025-09-17 16:22:25.346574 | orchestrator | 2025-09-17 16:22:25 | INFO  | Task 32f6baa8-dcf7-412c-b1a4-26988db41ed7 is in state SUCCESS 2025-09-17 16:22:25.348833 | orchestrator | 2025-09-17 16:22:25.348867 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:22:25.348879 | orchestrator | 2025-09-17 16:22:25.348891 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:22:25.348902 | orchestrator | Wednesday 17 September 2025 16:19:42 +0000 (0:00:00.334) 0:00:00.334 *** 2025-09-17 16:22:25.348913 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:22:25.348925 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:22:25.349099 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:22:25.349115 | orchestrator | 2025-09-17 16:22:25.349126 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:22:25.349138 | orchestrator | Wednesday 17 September 2025 16:19:43 +0000 (0:00:00.313) 0:00:00.648 *** 2025-09-17 16:22:25.349149 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-17 16:22:25.349162 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-17 16:22:25.349173 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-17 
16:22:25.349186 | orchestrator | 2025-09-17 16:22:25.349198 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-17 16:22:25.349209 | orchestrator | 2025-09-17 16:22:25.349221 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-17 16:22:25.349233 | orchestrator | Wednesday 17 September 2025 16:19:43 +0000 (0:00:00.718) 0:00:01.367 *** 2025-09-17 16:22:25.349244 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:22:25.349300 | orchestrator | 2025-09-17 16:22:25.349312 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-17 16:22:25.349341 | orchestrator | Wednesday 17 September 2025 16:19:44 +0000 (0:00:01.013) 0:00:02.380 *** 2025-09-17 16:22:25.349353 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-17 16:22:25.349363 | orchestrator | 2025-09-17 16:22:25.349374 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-17 16:22:25.349385 | orchestrator | Wednesday 17 September 2025 16:19:49 +0000 (0:00:04.225) 0:00:06.606 *** 2025-09-17 16:22:25.349395 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-17 16:22:25.349406 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-17 16:22:25.349417 | orchestrator | 2025-09-17 16:22:25.349427 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-17 16:22:25.349438 | orchestrator | Wednesday 17 September 2025 16:19:55 +0000 (0:00:06.855) 0:00:13.461 *** 2025-09-17 16:22:25.349448 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-17 16:22:25.349460 | orchestrator | 2025-09-17 16:22:25.349471 | orchestrator | TASK 
[service-ks-register : glance | Creating users] *************************** 2025-09-17 16:22:25.349482 | orchestrator | Wednesday 17 September 2025 16:19:59 +0000 (0:00:03.415) 0:00:16.877 *** 2025-09-17 16:22:25.349492 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-17 16:22:25.349503 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-17 16:22:25.349513 | orchestrator | 2025-09-17 16:22:25.349524 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-17 16:22:25.349534 | orchestrator | Wednesday 17 September 2025 16:20:02 +0000 (0:00:03.613) 0:00:20.490 *** 2025-09-17 16:22:25.349569 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-17 16:22:25.349580 | orchestrator | 2025-09-17 16:22:25.349591 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-17 16:22:25.349601 | orchestrator | Wednesday 17 September 2025 16:20:06 +0000 (0:00:03.313) 0:00:23.804 *** 2025-09-17 16:22:25.349612 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-17 16:22:25.349622 | orchestrator | 2025-09-17 16:22:25.349633 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-17 16:22:25.349644 | orchestrator | Wednesday 17 September 2025 16:20:10 +0000 (0:00:04.012) 0:00:27.816 *** 2025-09-17 16:22:25.349674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:22:25.349698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:22:25.349721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:22:25.349736 | orchestrator | 2025-09-17 16:22:25.349748 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-17 16:22:25.349760 | orchestrator | Wednesday 17 September 2025 16:20:13 +0000 (0:00:03.089) 0:00:30.905 *** 2025-09-17 16:22:25.349780 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:22:25.349793 | orchestrator | 2025-09-17 16:22:25.349806 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-17 16:22:25.349818 | orchestrator | Wednesday 17 September 2025 16:20:13 +0000 (0:00:00.541) 0:00:31.446 *** 2025-09-17 16:22:25.349830 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:22:25.349842 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:22:25.349854 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:22:25.349866 | orchestrator | 2025-09-17 16:22:25.349878 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-17 16:22:25.349890 | orchestrator | Wednesday 17 September 2025 16:20:16 +0000 (0:00:03.056) 0:00:34.503 *** 2025-09-17 16:22:25.349902 | orchestrator | 
changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 16:22:25.349916 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 16:22:25.349928 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 16:22:25.349940 | orchestrator | 2025-09-17 16:22:25.349951 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-17 16:22:25.349964 | orchestrator | Wednesday 17 September 2025 16:20:18 +0000 (0:00:01.548) 0:00:36.051 *** 2025-09-17 16:22:25.349976 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 16:22:25.349993 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 16:22:25.350005 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 16:22:25.350072 | orchestrator | 2025-09-17 16:22:25.350086 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-17 16:22:25.350106 | orchestrator | Wednesday 17 September 2025 16:20:20 +0000 (0:00:01.462) 0:00:37.514 *** 2025-09-17 16:22:25.350119 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:22:25.350130 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:22:25.350140 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:22:25.350151 | orchestrator | 2025-09-17 16:22:25.350162 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-17 16:22:25.350172 | orchestrator | Wednesday 17 September 2025 16:20:20 +0000 (0:00:00.960) 0:00:38.475 *** 2025-09-17 16:22:25.350183 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:22:25.350193 | 
orchestrator | 2025-09-17 16:22:25.350204 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-17 16:22:25.350214 | orchestrator | Wednesday 17 September 2025 16:20:21 +0000 (0:00:00.145) 0:00:38.620 *** 2025-09-17 16:22:25.350225 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:22:25.350236 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:22:25.350246 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:22:25.350275 | orchestrator | 2025-09-17 16:22:25.350287 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-17 16:22:25.350297 | orchestrator | Wednesday 17 September 2025 16:20:21 +0000 (0:00:00.282) 0:00:38.903 *** 2025-09-17 16:22:25.350308 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:22:25.350318 | orchestrator | 2025-09-17 16:22:25.350329 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-17 16:22:25.350340 | orchestrator | Wednesday 17 September 2025 16:20:21 +0000 (0:00:00.464) 0:00:39.368 *** 2025-09-17 16:22:25.350360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:22:25.350379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:22:25.350400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:22:25.350412 | orchestrator | 2025-09-17 16:22:25.350423 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-17 16:22:25.350433 | orchestrator | Wednesday 17 September 2025 16:20:25 +0000 (0:00:03.682) 0:00:43.050 *** 2025-09-17 16:22:25.350460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 16:22:25.350480 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:22:25.350492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 16:22:25.350504 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:22:25.350524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 16:22:25.350544 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:22:25.350555 | orchestrator | 2025-09-17 16:22:25.350566 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-17 16:22:25.350577 | orchestrator | Wednesday 17 September 2025 16:20:28 +0000 (0:00:02.527) 0:00:45.578 *** 2025-09-17 16:22:25.350594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 16:22:25.350607 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:22:25.350625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-17 16:22:25.350651 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:22:25.350667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}}}})  2025-09-17 16:22:25.350679 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:22:25.350690 | orchestrator | 2025-09-17 16:22:25.350700 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-17 16:22:25.350711 | orchestrator | Wednesday 17 September 2025 16:20:31 +0000 (0:00:03.296) 0:00:48.875 *** 2025-09-17 16:22:25.350826 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:22:25.350838 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:22:25.350849 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:22:25.350860 | orchestrator | 2025-09-17 16:22:25.350870 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-17 16:22:25.350881 | orchestrator | Wednesday 17 September 2025 16:20:34 +0000 (0:00:02.771) 0:00:51.646 *** 2025-09-17 16:22:25.350904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:22:25.350932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:22:25.350945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:22:25.350957 | orchestrator | 2025-09-17 16:22:25.350968 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-17 16:22:25.350985 | orchestrator | Wednesday 17 September 2025 16:20:37 +0000 (0:00:03.257) 0:00:54.903 *** 2025-09-17 16:22:25.351070 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:22:25.351086 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:22:25.351098 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:22:25.351110 | orchestrator | 2025-09-17 16:22:25.351122 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-17 16:22:25.351142 | orchestrator | Wednesday 17 September 2025 16:20:45 +0000 (0:00:08.547) 0:01:03.451 *** 2025-09-17 16:22:25.351153 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:22:25.351165 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:22:25.351176 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:22:25.351188 | orchestrator | 2025-09-17 16:22:25.351199 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-17 16:22:25.351211 | orchestrator | Wednesday 17 September 2025 16:20:49 +0000 (0:00:03.295) 0:01:06.746 *** 2025-09-17 16:22:25.351223 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:22:25.351234 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:22:25.351245 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:22:25.351277 | orchestrator | 2025-09-17 16:22:25.351289 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-17 16:22:25.351300 | orchestrator | Wednesday 17 September 2025 16:20:53 +0000 (0:00:04.170) 0:01:10.917 *** 2025-09-17 16:22:25.351310 | orchestrator | skipping: 
[testbed-node-1] 2025-09-17 16:22:25.351321 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:22:25.351331 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:22:25.351342 | orchestrator | 2025-09-17 16:22:25.351352 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-17 16:22:25.351363 | orchestrator | Wednesday 17 September 2025 16:20:57 +0000 (0:00:03.609) 0:01:14.526 *** 2025-09-17 16:22:25.351374 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:22:25.351384 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:22:25.351395 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:22:25.351405 | orchestrator | 2025-09-17 16:22:25.351416 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-17 16:22:25.351433 | orchestrator | Wednesday 17 September 2025 16:21:02 +0000 (0:00:04.983) 0:01:19.510 *** 2025-09-17 16:22:25.351444 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:22:25.351454 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:22:25.351465 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:22:25.351475 | orchestrator | 2025-09-17 16:22:25.351486 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-17 16:22:25.351496 | orchestrator | Wednesday 17 September 2025 16:21:02 +0000 (0:00:00.311) 0:01:19.821 *** 2025-09-17 16:22:25.351507 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-17 16:22:25.351518 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:22:25.351528 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-17 16:22:25.351539 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:22:25.351550 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  
2025-09-17 16:22:25.351560 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:22:25.351571 | orchestrator | 2025-09-17 16:22:25.351581 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-17 16:22:25.351592 | orchestrator | Wednesday 17 September 2025 16:21:05 +0000 (0:00:03.435) 0:01:23.256 *** 2025-09-17 16:22:25.351604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:22:25.351640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:22:25.351655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-17 16:22:25.351674 | orchestrator | 2025-09-17 16:22:25.351685 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-17 16:22:25.351696 | orchestrator | Wednesday 17 September 2025 16:21:09 +0000 (0:00:03.818) 0:01:27.075 *** 2025-09-17 16:22:25.351709 | orchestrator | skipping: 
[testbed-node-0] 2025-09-17 16:22:25.351721 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:22:25.351733 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:22:25.351745 | orchestrator | 2025-09-17 16:22:25.351757 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-17 16:22:25.351769 | orchestrator | Wednesday 17 September 2025 16:21:09 +0000 (0:00:00.370) 0:01:27.445 *** 2025-09-17 16:22:25.351781 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:22:25.351793 | orchestrator | 2025-09-17 16:22:25.351805 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-17 16:22:25.351817 | orchestrator | Wednesday 17 September 2025 16:21:12 +0000 (0:00:02.296) 0:01:29.742 *** 2025-09-17 16:22:25.351830 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:22:25.351842 | orchestrator | 2025-09-17 16:22:25.351854 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-17 16:22:25.351866 | orchestrator | Wednesday 17 September 2025 16:21:14 +0000 (0:00:02.591) 0:01:32.334 *** 2025-09-17 16:22:25.351879 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:22:25.351890 | orchestrator | 2025-09-17 16:22:25.351900 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-17 16:22:25.351918 | orchestrator | Wednesday 17 September 2025 16:21:17 +0000 (0:00:02.279) 0:01:34.613 *** 2025-09-17 16:22:25.351929 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:22:25.351939 | orchestrator | 2025-09-17 16:22:25.351950 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-17 16:22:25.351960 | orchestrator | Wednesday 17 September 2025 16:21:47 +0000 (0:00:30.369) 0:02:04.982 *** 2025-09-17 16:22:25.351971 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:22:25.351982 | orchestrator | 2025-09-17 
16:22:25.351992 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-17 16:22:25.352003 | orchestrator | Wednesday 17 September 2025 16:21:49 +0000 (0:00:02.269) 0:02:07.252 *** 2025-09-17 16:22:25.352013 | orchestrator | 2025-09-17 16:22:25.352024 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-17 16:22:25.352035 | orchestrator | Wednesday 17 September 2025 16:21:49 +0000 (0:00:00.224) 0:02:07.477 *** 2025-09-17 16:22:25.352045 | orchestrator | 2025-09-17 16:22:25.352056 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-17 16:22:25.352066 | orchestrator | Wednesday 17 September 2025 16:21:50 +0000 (0:00:00.065) 0:02:07.542 *** 2025-09-17 16:22:25.352077 | orchestrator | 2025-09-17 16:22:25.352088 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-17 16:22:25.352098 | orchestrator | Wednesday 17 September 2025 16:21:50 +0000 (0:00:00.066) 0:02:07.608 *** 2025-09-17 16:22:25.352109 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:22:25.352120 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:22:25.352130 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:22:25.352141 | orchestrator | 2025-09-17 16:22:25.352151 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:22:25.352174 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-17 16:22:25.352186 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-17 16:22:25.352197 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-17 16:22:25.352208 | orchestrator | 2025-09-17 16:22:25.352218 | orchestrator | 2025-09-17 16:22:25.352229 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:22:25.352240 | orchestrator | Wednesday 17 September 2025 16:22:22 +0000 (0:00:32.186) 0:02:39.795 *** 2025-09-17 16:22:25.352251 | orchestrator | =============================================================================== 2025-09-17 16:22:25.352310 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.19s 2025-09-17 16:22:25.352323 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.37s 2025-09-17 16:22:25.352335 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.55s 2025-09-17 16:22:25.352346 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.86s 2025-09-17 16:22:25.352358 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.98s 2025-09-17 16:22:25.352369 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.23s 2025-09-17 16:22:25.352381 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.17s 2025-09-17 16:22:25.352392 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.01s 2025-09-17 16:22:25.352403 | orchestrator | glance : Check glance containers ---------------------------------------- 3.82s 2025-09-17 16:22:25.352415 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.68s 2025-09-17 16:22:25.352426 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.61s 2025-09-17 16:22:25.352438 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.61s 2025-09-17 16:22:25.352449 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.44s 2025-09-17 16:22:25.352461 | orchestrator | 
service-ks-register : glance | Creating projects ------------------------ 3.42s 2025-09-17 16:22:25.352472 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.31s 2025-09-17 16:22:25.352484 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.30s 2025-09-17 16:22:25.352495 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.30s 2025-09-17 16:22:25.352507 | orchestrator | glance : Copying over config.json files for services -------------------- 3.26s 2025-09-17 16:22:25.352518 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.09s 2025-09-17 16:22:25.352530 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.06s 2025-09-17 16:22:25.352541 | orchestrator | 2025-09-17 16:22:25 | INFO  | Task 26381440-eddc-4c4b-85a7-e42a4fe97296 is in state STARTED 2025-09-17 16:22:25.352553 | orchestrator | 2025-09-17 16:22:25 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:22:25.352565 | orchestrator | 2025-09-17 16:22:25 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:22:28.399945 | orchestrator | 2025-09-17 16:22:28 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 2025-09-17 16:22:28.400861 | orchestrator | 2025-09-17 16:22:28 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:22:28.401698 | orchestrator | 2025-09-17 16:22:28 | INFO  | Task 26381440-eddc-4c4b-85a7-e42a4fe97296 is in state STARTED 2025-09-17 16:22:28.402834 | orchestrator | 2025-09-17 16:22:28 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:22:28.402887 | orchestrator | 2025-09-17 16:22:28 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:22:31.444980 | orchestrator | 2025-09-17 16:22:31 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 
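The repeated status records above come from a wait loop that polls each task until it leaves the STARTED state, sleeping between checks. A minimal sketch of such a loop, assuming a `get_state` callback as a stand-in for the real task-status lookup (the function names and state strings here are illustrative, not the actual osism implementation):

```python
import itertools
import time


def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll every task until each reaches a terminal state.

    get_state(task_id) -> str, e.g. "STARTED" or "SUCCESS" (hypothetical
    stand-in for the real task-state query).
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)


# Hypothetical state source: the task reports STARTED twice, then SUCCESS.
states = {"t1": itertools.chain(["STARTED", "STARTED"],
                                itertools.repeat("SUCCESS"))}
wait_for_tasks(["t1"], lambda t: next(states[t]), interval=0)
```

With several pending task IDs, each poll cycle emits one status record per task followed by a single wait message, which matches the shape of the records in the log.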
2025-09-17 16:22:31.447062 | orchestrator | 2025-09-17 16:22:31 | INFO  | Task 
aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:22:58.912669 | orchestrator | 2025-09-17 16:22:58 | INFO  | Task 26381440-eddc-4c4b-85a7-e42a4fe97296 is in state STARTED 2025-09-17 16:22:58.913787 | orchestrator | 2025-09-17 16:22:58 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:22:58.913814 | orchestrator | 2025-09-17 16:22:58 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:23:01.960392 | orchestrator | 2025-09-17 16:23:01 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 2025-09-17 16:23:01.962355 | orchestrator | 2025-09-17 16:23:01 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:23:01.965604 | orchestrator | 2025-09-17 16:23:01 | INFO  | Task 26381440-eddc-4c4b-85a7-e42a4fe97296 is in state STARTED 2025-09-17 16:23:01.967327 | orchestrator | 2025-09-17 16:23:01 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:23:01.967838 | orchestrator | 2025-09-17 16:23:01 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:23:05.019780 | orchestrator | 2025-09-17 16:23:05 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 2025-09-17 16:23:05.021502 | orchestrator | 2025-09-17 16:23:05 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:23:05.038303 | orchestrator | 2025-09-17 16:23:05 | INFO  | Task 26381440-eddc-4c4b-85a7-e42a4fe97296 is in state STARTED 2025-09-17 16:23:05.038367 | orchestrator | 2025-09-17 16:23:05 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:23:05.038378 | orchestrator | 2025-09-17 16:23:05 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:23:08.070863 | orchestrator | 2025-09-17 16:23:08 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state STARTED 2025-09-17 16:23:08.075497 | orchestrator | 2025-09-17 16:23:08 | INFO  | Task 
aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:23:08.078209 | orchestrator | 2025-09-17 16:23:08 | INFO  | Task 26381440-eddc-4c4b-85a7-e42a4fe97296 is in state STARTED 2025-09-17 16:23:08.079946 | orchestrator | 2025-09-17 16:23:08 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:23:08.080258 | orchestrator | 2025-09-17 16:23:08 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:23:11.130719 | orchestrator | 2025-09-17 16:23:11 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:23:11.135438 | orchestrator | 2025-09-17 16:23:11 | INFO  | Task c7761af0-31ad-4395-bcda-18e65f169987 is in state SUCCESS 2025-09-17 16:23:11.137135 | orchestrator | 2025-09-17 16:23:11.137194 | orchestrator | 2025-09-17 16:23:11.137296 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:23:11.137322 | orchestrator | 2025-09-17 16:23:11.137342 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:23:11.137362 | orchestrator | Wednesday 17 September 2025 16:20:17 +0000 (0:00:00.231) 0:00:00.231 *** 2025-09-17 16:23:11.137512 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:23:11.137544 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:23:11.137565 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:23:11.137583 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:23:11.137602 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:23:11.137621 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:23:11.137639 | orchestrator | 2025-09-17 16:23:11.137658 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:23:11.137676 | orchestrator | Wednesday 17 September 2025 16:20:17 +0000 (0:00:00.615) 0:00:00.846 *** 2025-09-17 16:23:11.137695 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-17 
16:23:11.137715 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-17 16:23:11.137734 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-17 16:23:11.137754 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-17 16:23:11.137774 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-17 16:23:11.137794 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-17 16:23:11.137814 | orchestrator | 2025-09-17 16:23:11.137834 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-17 16:23:11.138174 | orchestrator | 2025-09-17 16:23:11.138203 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-17 16:23:11.138250 | orchestrator | Wednesday 17 September 2025 16:20:18 +0000 (0:00:00.490) 0:00:01.336 *** 2025-09-17 16:23:11.138292 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:23:11.138314 | orchestrator | 2025-09-17 16:23:11.138333 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-17 16:23:11.138351 | orchestrator | Wednesday 17 September 2025 16:20:19 +0000 (0:00:01.325) 0:00:02.661 *** 2025-09-17 16:23:11.138393 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-17 16:23:11.138410 | orchestrator | 2025-09-17 16:23:11.138421 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-17 16:23:11.138432 | orchestrator | Wednesday 17 September 2025 16:20:23 +0000 (0:00:03.743) 0:00:06.405 *** 2025-09-17 16:23:11.138443 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-17 16:23:11.138454 | orchestrator | changed: 
[testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-17 16:23:11.138468 | orchestrator | 2025-09-17 16:23:11.138485 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-17 16:23:11.138504 | orchestrator | Wednesday 17 September 2025 16:20:28 +0000 (0:00:05.089) 0:00:11.494 *** 2025-09-17 16:23:11.138524 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-17 16:23:11.138543 | orchestrator | 2025-09-17 16:23:11.138562 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-17 16:23:11.138953 | orchestrator | Wednesday 17 September 2025 16:20:31 +0000 (0:00:02.819) 0:00:14.314 *** 2025-09-17 16:23:11.138981 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-17 16:23:11.138993 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-17 16:23:11.139004 | orchestrator | 2025-09-17 16:23:11.139014 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-17 16:23:11.139025 | orchestrator | Wednesday 17 September 2025 16:20:34 +0000 (0:00:03.306) 0:00:17.621 *** 2025-09-17 16:23:11.139036 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-17 16:23:11.139046 | orchestrator | 2025-09-17 16:23:11.139057 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-17 16:23:11.139067 | orchestrator | Wednesday 17 September 2025 16:20:37 +0000 (0:00:02.826) 0:00:20.447 *** 2025-09-17 16:23:11.139078 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-17 16:23:11.139088 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-17 16:23:11.139099 | orchestrator | 2025-09-17 16:23:11.139110 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 
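The endpoint records above show how the internal and public cinderv3 URLs differ only in FQDN, while both keep the literal `%(tenant_id)s` substitution token that Keystone resolves per request. A small sketch of how such URLs are assembled (the helper name is hypothetical; the FQDNs and port are taken from the log):

```python
def cinder_endpoint(fqdn, port=8776):
    """Build a cinderv3 endpoint URL of the form seen in the log.

    %(tenant_id)s is kept literally in the registered URL; Keystone
    substitutes the project ID when the endpoint is used.
    """
    return f"https://{fqdn}:{port}/v3/%(tenant_id)s"


# The two endpoints registered for cinderv3 above:
internal = cinder_endpoint("api-int.testbed.osism.xyz")  # internal interface
public = cinder_endpoint("api.testbed.osism.xyz")        # public interface
```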
2025-09-17 16:23:11.139120 | orchestrator | Wednesday 17 September 2025 16:20:45 +0000 (0:00:08.121) 0:00:28.568 *** 2025-09-17 16:23:11.139175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 16:23:11.139192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 
16:23:11.139261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.139276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 16:23:11.139287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.139298 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.139342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.139355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.139380 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.139392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.139403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.139439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.139452 | orchestrator | 2025-09-17 16:23:11.139463 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2025-09-17 16:23:11.139474 | orchestrator | Wednesday 17 September 2025 16:20:48 +0000 (0:00:02.974) 0:00:31.542 *** 2025-09-17 16:23:11.139492 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:23:11.139503 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:23:11.139513 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:23:11.139525 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:23:11.139535 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:23:11.139546 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:23:11.139556 | orchestrator | 2025-09-17 16:23:11.139567 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-17 16:23:11.139579 | orchestrator | Wednesday 17 September 2025 16:20:49 +0000 (0:00:00.657) 0:00:32.200 *** 2025-09-17 16:23:11.139591 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:23:11.139603 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:23:11.139615 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:23:11.139627 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:23:11.139639 | orchestrator | 2025-09-17 16:23:11.139651 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-17 16:23:11.139668 | orchestrator | Wednesday 17 September 2025 16:20:50 +0000 (0:00:00.826) 0:00:33.027 *** 2025-09-17 16:23:11.139689 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-17 16:23:11.139711 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-17 16:23:11.139732 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-17 16:23:11.139859 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-17 16:23:11.139873 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 
2025-09-17 16:23:11.139885 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-17 16:23:11.139896 | orchestrator | 2025-09-17 16:23:11.139908 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-17 16:23:11.139920 | orchestrator | Wednesday 17 September 2025 16:20:52 +0000 (0:00:02.137) 0:00:35.164 *** 2025-09-17 16:23:11.139934 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-17 16:23:11.139948 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-17 16:23:11.139994 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-17 16:23:11.140018 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-17 16:23:11.140035 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-17 16:23:11.140046 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-17 16:23:11.140058 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-17 16:23:11.140099 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-17 16:23:11.140118 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-17 16:23:11.140135 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-17 16:23:11.140148 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}]) 2025-09-17 16:23:11.140159 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-17 16:23:11.140176 | orchestrator | 2025-09-17 16:23:11.140187 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-17 16:23:11.140199 | orchestrator | Wednesday 17 September 2025 16:20:56 +0000 (0:00:03.987) 0:00:39.152 *** 2025-09-17 16:23:11.140209 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 16:23:11.140249 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 16:23:11.140259 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-17 16:23:11.140270 | orchestrator | 2025-09-17 16:23:11.140280 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-17 16:23:11.140291 | orchestrator | Wednesday 17 September 2025 16:20:58 +0000 (0:00:02.499) 0:00:41.652 *** 2025-09-17 16:23:11.140334 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-17 16:23:11.140347 | orchestrator | changed: [testbed-node-5] => 
(item=ceph.client.cinder.keyring) 2025-09-17 16:23:11.140357 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-17 16:23:11.140368 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-17 16:23:11.140378 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-17 16:23:11.140389 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-17 16:23:11.140400 | orchestrator | 2025-09-17 16:23:11.140410 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-17 16:23:11.140421 | orchestrator | Wednesday 17 September 2025 16:21:02 +0000 (0:00:03.619) 0:00:45.272 *** 2025-09-17 16:23:11.140431 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-17 16:23:11.140442 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-17 16:23:11.140452 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-17 16:23:11.140463 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-17 16:23:11.140474 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-17 16:23:11.140484 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-17 16:23:11.140494 | orchestrator | 2025-09-17 16:23:11.140505 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-17 16:23:11.140515 | orchestrator | Wednesday 17 September 2025 16:21:03 +0000 (0:00:00.951) 0:00:46.223 *** 2025-09-17 16:23:11.140526 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:23:11.140536 | orchestrator | 2025-09-17 16:23:11.140547 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-17 16:23:11.140557 | orchestrator | Wednesday 17 September 2025 16:21:03 +0000 (0:00:00.110) 0:00:46.333 *** 2025-09-17 16:23:11.140573 | orchestrator | skipping: 
[testbed-node-0] 2025-09-17 16:23:11.140584 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:23:11.140594 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:23:11.140605 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:23:11.140615 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:23:11.140626 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:23:11.140636 | orchestrator | 2025-09-17 16:23:11.140647 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-17 16:23:11.140657 | orchestrator | Wednesday 17 September 2025 16:21:04 +0000 (0:00:00.652) 0:00:46.986 *** 2025-09-17 16:23:11.140670 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-17 16:23:11.140681 | orchestrator | 2025-09-17 16:23:11.140692 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-17 16:23:11.140702 | orchestrator | Wednesday 17 September 2025 16:21:05 +0000 (0:00:01.358) 0:00:48.344 *** 2025-09-17 16:23:11.140714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 16:23:11.140740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 16:23:11.140785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 16:23:11.140804 | orchestrator 
| changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.140816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.140834 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.140846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.140887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.140901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.140917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.140929 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.140947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.140959 | orchestrator |
2025-09-17 16:23:11.140969 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-09-17 16:23:11.140980 | orchestrator | Wednesday 17 September 2025 16:21:08 +0000 (0:00:03.256) 0:00:51.602 ***
2025-09-17 16:23:11.141019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-17 16:23:11.141033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141044 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:23:11.141060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-17 16:23:11.141079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-17 16:23:11.141101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141112 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:23:11.141123 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:23:11.141140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141174 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:23:11.141186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141197 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141208 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:23:11.141244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141280 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:23:11.141291 | orchestrator |
2025-09-17 16:23:11.141301 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-09-17 16:23:11.141312 | orchestrator | Wednesday 17 September 2025 16:21:09 +0000 (0:00:01.137) 0:00:52.739 ***
2025-09-17 16:23:11.141328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-17 16:23:11.141347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141358 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:23:11.141369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-17 16:23:11.141381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141392 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:23:11.141410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-17 16:23:11.141422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141438 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:23:11.141454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141477 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:23:11.141488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141517 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:23:11.141528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141562 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:23:11.141572 | orchestrator |
2025-09-17 16:23:11.141583 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-09-17 16:23:11.141594 | orchestrator | Wednesday 17 September 2025 16:21:11 +0000 (0:00:02.019) 0:00:54.758 ***
2025-09-17 16:23:11.141605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-17 16:23:11.141616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-17 16:23:11.141634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-17 16:23:11.141684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141732 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-17 16:23:11.141782 | orchestrator |
2025-09-17 16:23:11.141793 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-09-17 16:23:11.141804 | orchestrator | Wednesday 17 September 2025 16:21:14 +0000 (0:00:02.712) 0:00:57.470 ***
2025-09-17 16:23:11.141815 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-17 16:23:11.141826 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:23:11.141836 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-17 16:23:11.141847 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:23:11.141858 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-17 16:23:11.141868 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:23:11.141879 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-17 16:23:11.141896 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-17 16:23:11.141913 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-17 16:23:11.141924 | orchestrator |
2025-09-17 16:23:11.141934 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2025-09-17 16:23:11.141945 | orchestrator | Wednesday 17 September 2025 16:21:16 +0000 (0:00:01.690) 0:00:59.161 ***
2025-09-17 16:23:11.141956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-17 16:23:11.141972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-17 16:23:11.141984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 16:23:11.141995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142055 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142076 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142160 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142171 | orchestrator | 2025-09-17 16:23:11.142187 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-17 16:23:11.142198 | orchestrator | Wednesday 17 September 2025 16:21:24 +0000 (0:00:08.021) 0:01:07.183 *** 2025-09-17 16:23:11.142209 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:23:11.142253 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:23:11.142271 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:23:11.142282 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:23:11.142292 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:23:11.142303 | orchestrator | 
changed: [testbed-node-5] 2025-09-17 16:23:11.142313 | orchestrator | 2025-09-17 16:23:11.142324 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-17 16:23:11.142334 | orchestrator | Wednesday 17 September 2025 16:21:26 +0000 (0:00:02.225) 0:01:09.408 *** 2025-09-17 16:23:11.142345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 16:23:11.142357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:23:11.142375 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:23:11.142393 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 16:23:11.142405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:23:11.142416 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:23:11.142432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-17 16:23:11.142444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:23:11.142454 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:23:11.142465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 16:23:11.142483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 16:23:11.142495 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:23:11.142512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
2025-09-17 16:23:11.142529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 16:23:11.142540 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:23:11.142551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-17 16:23:11.142563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-17 16:23:11.142584 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:23:11.142595 | orchestrator | 2025-09-17 16:23:11.142606 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-17 16:23:11.142616 | orchestrator | Wednesday 17 September 2025 16:21:27 +0000 (0:00:00.818) 0:01:10.226 *** 2025-09-17 16:23:11.142627 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:23:11.142637 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:23:11.142648 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:23:11.142658 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:23:11.142669 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:23:11.142679 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:23:11.142689 | orchestrator | 2025-09-17 16:23:11.142700 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-17 16:23:11.142710 | orchestrator | Wednesday 17 September 2025 16:21:27 +0000 (0:00:00.591) 0:01:10.818 *** 2025-09-17 16:23:11.142728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 16:23:11.142740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 16:23:11.142752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-17 16:23:11.142772 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142909 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142927 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-17 16:23:11.142938 | orchestrator | 2025-09-17 16:23:11.142949 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-17 16:23:11.142960 | orchestrator | Wednesday 17 September 2025 16:21:29 +0000 (0:00:02.113) 0:01:12.931 *** 2025-09-17 16:23:11.142970 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:23:11.142981 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:23:11.142991 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:23:11.143002 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:23:11.143012 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:23:11.143023 | orchestrator | 
skipping: [testbed-node-5] 2025-09-17 16:23:11.143033 | orchestrator | 2025-09-17 16:23:11.143044 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-17 16:23:11.143054 | orchestrator | Wednesday 17 September 2025 16:21:30 +0000 (0:00:00.705) 0:01:13.637 *** 2025-09-17 16:23:11.143064 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:23:11.143075 | orchestrator | 2025-09-17 16:23:11.143090 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-17 16:23:11.143101 | orchestrator | Wednesday 17 September 2025 16:21:32 +0000 (0:00:02.190) 0:01:15.828 *** 2025-09-17 16:23:11.143118 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:23:11.143128 | orchestrator | 2025-09-17 16:23:11.143139 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-17 16:23:11.143150 | orchestrator | Wednesday 17 September 2025 16:21:35 +0000 (0:00:02.573) 0:01:18.401 *** 2025-09-17 16:23:11.143160 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:23:11.143171 | orchestrator | 2025-09-17 16:23:11.143181 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-17 16:23:11.143192 | orchestrator | Wednesday 17 September 2025 16:21:54 +0000 (0:00:19.009) 0:01:37.411 *** 2025-09-17 16:23:11.143202 | orchestrator | 2025-09-17 16:23:11.143252 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-17 16:23:11.143265 | orchestrator | Wednesday 17 September 2025 16:21:54 +0000 (0:00:00.072) 0:01:37.484 *** 2025-09-17 16:23:11.143276 | orchestrator | 2025-09-17 16:23:11.143286 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-17 16:23:11.143297 | orchestrator | Wednesday 17 September 2025 16:21:54 +0000 (0:00:00.069) 0:01:37.553 *** 2025-09-17 16:23:11.143307 | orchestrator | 
2025-09-17 16:23:11.143318 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-17 16:23:11.143329 | orchestrator | Wednesday 17 September 2025 16:21:54 +0000 (0:00:00.061) 0:01:37.615 *** 2025-09-17 16:23:11.143339 | orchestrator | 2025-09-17 16:23:11.143349 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-17 16:23:11.143360 | orchestrator | Wednesday 17 September 2025 16:21:54 +0000 (0:00:00.082) 0:01:37.697 *** 2025-09-17 16:23:11.143370 | orchestrator | 2025-09-17 16:23:11.143381 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-17 16:23:11.143391 | orchestrator | Wednesday 17 September 2025 16:21:54 +0000 (0:00:00.067) 0:01:37.765 *** 2025-09-17 16:23:11.143401 | orchestrator | 2025-09-17 16:23:11.143412 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-17 16:23:11.143422 | orchestrator | Wednesday 17 September 2025 16:21:54 +0000 (0:00:00.095) 0:01:37.860 *** 2025-09-17 16:23:11.143433 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:23:11.143443 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:23:11.143454 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:23:11.143464 | orchestrator | 2025-09-17 16:23:11.143475 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-17 16:23:11.143485 | orchestrator | Wednesday 17 September 2025 16:22:22 +0000 (0:00:27.406) 0:02:05.267 *** 2025-09-17 16:23:11.143496 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:23:11.143506 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:23:11.143516 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:23:11.143527 | orchestrator | 2025-09-17 16:23:11.143537 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-17 
16:23:11.143547 | orchestrator | Wednesday 17 September 2025 16:22:32 +0000 (0:00:10.217) 0:02:15.485 *** 2025-09-17 16:23:11.143558 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:23:11.143568 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:23:11.143579 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:23:11.143589 | orchestrator | 2025-09-17 16:23:11.143600 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-17 16:23:11.143610 | orchestrator | Wednesday 17 September 2025 16:23:01 +0000 (0:00:29.390) 0:02:44.876 *** 2025-09-17 16:23:11.143621 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:23:11.143631 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:23:11.143641 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:23:11.143652 | orchestrator | 2025-09-17 16:23:11.143662 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-17 16:23:11.143673 | orchestrator | Wednesday 17 September 2025 16:23:07 +0000 (0:00:05.204) 0:02:50.080 *** 2025-09-17 16:23:11.143684 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:23:11.143694 | orchestrator | 2025-09-17 16:23:11.143712 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:23:11.143730 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-17 16:23:11.143742 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-17 16:23:11.143753 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-17 16:23:11.143763 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-17 16:23:11.143774 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 
ignored=0 2025-09-17 16:23:11.143784 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-17 16:23:11.143795 | orchestrator | 2025-09-17 16:23:11.143806 | orchestrator | 2025-09-17 16:23:11.143816 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:23:11.143827 | orchestrator | Wednesday 17 September 2025 16:23:07 +0000 (0:00:00.643) 0:02:50.723 *** 2025-09-17 16:23:11.143837 | orchestrator | =============================================================================== 2025-09-17 16:23:11.143848 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 29.39s 2025-09-17 16:23:11.143858 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.41s 2025-09-17 16:23:11.143874 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.01s 2025-09-17 16:23:11.143885 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.22s 2025-09-17 16:23:11.143895 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.12s 2025-09-17 16:23:11.143906 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 8.02s 2025-09-17 16:23:11.143916 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.20s 2025-09-17 16:23:11.143926 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.09s 2025-09-17 16:23:11.143937 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.99s 2025-09-17 16:23:11.143947 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.74s 2025-09-17 16:23:11.143958 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.62s 2025-09-17 16:23:11.143968 | 
orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.31s 2025-09-17 16:23:11.143979 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.26s 2025-09-17 16:23:11.143989 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.97s 2025-09-17 16:23:11.143999 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.83s 2025-09-17 16:23:11.144010 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.82s 2025-09-17 16:23:11.144020 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.71s 2025-09-17 16:23:11.144030 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.57s 2025-09-17 16:23:11.144041 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.50s 2025-09-17 16:23:11.144052 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.23s 2025-09-17 16:23:11.144062 | orchestrator | 2025-09-17 16:23:11 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:23:11.145407 | orchestrator | 2025-09-17 16:23:11 | INFO  | Task 26381440-eddc-4c4b-85a7-e42a4fe97296 is in state STARTED 2025-09-17 16:23:11.146344 | orchestrator | 2025-09-17 16:23:11 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:23:11.146494 | orchestrator | 2025-09-17 16:23:11 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:23:14.190165 | orchestrator | 2025-09-17 16:23:14 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:23:14.193483 | orchestrator | 2025-09-17 16:23:14 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:23:14.196283 | orchestrator | 2025-09-17 16:23:14 | INFO  | Task 26381440-eddc-4c4b-85a7-e42a4fe97296 is 
in state STARTED 2025-09-17 16:23:14.198827 | orchestrator | 2025-09-17 16:23:14 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:23:14.199197 | orchestrator | 2025-09-17 16:23:14 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:23:17.250078 | orchestrator | 2025-09-17 16:23:17 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:23:17.251796 | orchestrator | 2025-09-17 16:23:17 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:23:17.254865 | orchestrator | 2025-09-17 16:23:17 | INFO  | Task 26381440-eddc-4c4b-85a7-e42a4fe97296 is in state STARTED 2025-09-17 16:23:17.258597 | orchestrator | 2025-09-17 16:23:17 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:23:17.262058 | orchestrator | 2025-09-17 16:23:17 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:23:20.297250 | orchestrator | 2025-09-17 16:23:20 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:23:20.298455 | orchestrator | 2025-09-17 16:23:20 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:23:20.299650 | orchestrator | 2025-09-17 16:23:20 | INFO  | Task 26381440-eddc-4c4b-85a7-e42a4fe97296 is in state SUCCESS 2025-09-17 16:23:20.300543 | orchestrator | 2025-09-17 16:23:20.300578 | orchestrator | 2025-09-17 16:23:20.300591 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:23:20.300605 | orchestrator | 2025-09-17 16:23:20.300618 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:23:20.300630 | orchestrator | Wednesday 17 September 2025 16:22:26 +0000 (0:00:00.250) 0:00:00.250 *** 2025-09-17 16:23:20.300641 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:23:20.300653 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:23:20.300664 | orchestrator 
| ok: [testbed-node-2] 2025-09-17 16:23:20.300674 | orchestrator | 2025-09-17 16:23:20.300685 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:23:20.300696 | orchestrator | Wednesday 17 September 2025 16:22:27 +0000 (0:00:00.281) 0:00:00.531 *** 2025-09-17 16:23:20.300706 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-17 16:23:20.300758 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-17 16:23:20.300771 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-17 16:23:20.300782 | orchestrator | 2025-09-17 16:23:20.300810 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-17 16:23:20.300822 | orchestrator | 2025-09-17 16:23:20.300832 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-17 16:23:20.300843 | orchestrator | Wednesday 17 September 2025 16:22:27 +0000 (0:00:00.390) 0:00:00.922 *** 2025-09-17 16:23:20.300854 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:23:20.300866 | orchestrator | 2025-09-17 16:23:20.300876 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-17 16:23:20.300888 | orchestrator | Wednesday 17 September 2025 16:22:28 +0000 (0:00:00.517) 0:00:01.439 *** 2025-09-17 16:23:20.300924 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-17 16:23:20.300935 | orchestrator | 2025-09-17 16:23:20.300946 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-17 16:23:20.300956 | orchestrator | Wednesday 17 September 2025 16:22:31 +0000 (0:00:03.550) 0:00:04.989 *** 2025-09-17 16:23:20.300967 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 
-> internal) 2025-09-17 16:23:20.300978 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-17 16:23:20.300988 | orchestrator | 2025-09-17 16:23:20.300999 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-17 16:23:20.301010 | orchestrator | Wednesday 17 September 2025 16:22:38 +0000 (0:00:06.716) 0:00:11.705 *** 2025-09-17 16:23:20.301020 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-17 16:23:20.301031 | orchestrator | 2025-09-17 16:23:20.301042 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-17 16:23:20.301086 | orchestrator | Wednesday 17 September 2025 16:22:41 +0000 (0:00:03.384) 0:00:15.090 *** 2025-09-17 16:23:20.301098 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-17 16:23:20.301109 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-17 16:23:20.301120 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-17 16:23:20.301130 | orchestrator | 2025-09-17 16:23:20.301141 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-17 16:23:20.301152 | orchestrator | Wednesday 17 September 2025 16:22:50 +0000 (0:00:08.586) 0:00:23.677 *** 2025-09-17 16:23:20.301163 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-17 16:23:20.301174 | orchestrator | 2025-09-17 16:23:20.301184 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-17 16:23:20.301195 | orchestrator | Wednesday 17 September 2025 16:22:53 +0000 (0:00:03.112) 0:00:26.790 *** 2025-09-17 16:23:20.301228 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-17 16:23:20.301238 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-17 16:23:20.301249 | 
orchestrator | 2025-09-17 16:23:20.301260 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-17 16:23:20.301270 | orchestrator | Wednesday 17 September 2025 16:22:59 +0000 (0:00:06.095) 0:00:32.885 *** 2025-09-17 16:23:20.301281 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-17 16:23:20.301291 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-17 16:23:20.301302 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-17 16:23:20.301312 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-17 16:23:20.301323 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-17 16:23:20.301334 | orchestrator | 2025-09-17 16:23:20.301344 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-17 16:23:20.301355 | orchestrator | Wednesday 17 September 2025 16:23:13 +0000 (0:00:13.997) 0:00:46.883 *** 2025-09-17 16:23:20.301365 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:23:20.301376 | orchestrator | 2025-09-17 16:23:20.301387 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-17 16:23:20.301397 | orchestrator | Wednesday 17 September 2025 16:23:14 +0000 (0:00:00.545) 0:00:47.428 *** 2025-09-17 16:23:20.301441 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-09-17 16:23:20.301607 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1758126195.6006448-6622-229960446967611/AnsiballZ_compute_flavor.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1758126195.6006448-6622-229960446967611/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1758126195.6006448-6622-229960446967611/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_76x40wv7/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in \n File \"/tmp/ansible_os_nova_flavor_payload_76x40wv7/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_76x40wv7/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_76x40wv7/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File 
\"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-09-17 16:23:20.301686 | orchestrator | 2025-09-17 16:23:20.301698 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:23:20.301709 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-17 16:23:20.301721 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:23:20.301733 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:23:20.301743 | orchestrator | 2025-09-17 16:23:20.301754 | orchestrator | 2025-09-17 16:23:20.301772 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-17 16:23:20.301782 | orchestrator | Wednesday 17 September 2025 16:23:17 +0000 (0:00:03.616) 0:00:51.045 *** 2025-09-17 16:23:20.301803 | orchestrator | =============================================================================== 2025-09-17 16:23:20.301814 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.00s 2025-09-17 16:23:20.301824 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.59s 2025-09-17 16:23:20.301835 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.72s 2025-09-17 16:23:20.301845 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.10s 2025-09-17 16:23:20.301856 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.62s 2025-09-17 16:23:20.301866 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.55s 2025-09-17 16:23:20.301877 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.38s 2025-09-17 16:23:20.301887 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.11s 2025-09-17 16:23:20.301903 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.55s 2025-09-17 16:23:20.301914 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.52s 2025-09-17 16:23:20.301925 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2025-09-17 16:23:20.301935 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2025-09-17 16:23:20.301946 | orchestrator | 2025-09-17 16:23:20 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:23:20.301957 | orchestrator | 2025-09-17 16:23:20 | 
INFO  | Wait 1 second(s) until the next check 2025-09-17 16:23:23.334782 | orchestrator | 2025-09-17 16:23:23 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:23:23.337557 | orchestrator | 2025-09-17 16:23:23 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:23:23.339246 | orchestrator | 2025-09-17 16:23:23 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:23:23.339282 | orchestrator | 2025-09-17 16:23:23 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:24:42.561250 | orchestrator | 2025-09-17 16:24:42 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:24:42.564333 | orchestrator | 2025-09-17 16:24:42 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:24:42.567042 | orchestrator | 2025-09-17 16:24:42 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state
STARTED 2025-09-17 16:24:42.567300 | orchestrator | 2025-09-17 16:24:42 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:24:45.628728 | orchestrator | 2025-09-17 16:24:45 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:24:45.631138 | orchestrator | 2025-09-17 16:24:45 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:24:45.633494 | orchestrator | 2025-09-17 16:24:45 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:24:45.633526 | orchestrator | 2025-09-17 16:24:45 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:24:48.665324 | orchestrator | 2025-09-17 16:24:48 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:24:48.665417 | orchestrator | 2025-09-17 16:24:48 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:24:48.666752 | orchestrator | 2025-09-17 16:24:48 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:24:48.666784 | orchestrator | 2025-09-17 16:24:48 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:24:51.706640 | orchestrator | 2025-09-17 16:24:51 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:24:51.706730 | orchestrator | 2025-09-17 16:24:51 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:24:51.707385 | orchestrator | 2025-09-17 16:24:51 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state STARTED 2025-09-17 16:24:51.707407 | orchestrator | 2025-09-17 16:24:51 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:24:54.751428 | orchestrator | 2025-09-17 16:24:54 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:24:54.752216 | orchestrator | 2025-09-17 16:24:54 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:24:54.752708 | orchestrator | 
2025-09-17 16:24:54 | INFO  | Task 1d3be422-077a-4eaf-b93e-2415f9e8fa9c is in state SUCCESS 2025-09-17 16:24:54.753002 | orchestrator | 2025-09-17 16:24:54 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:24:57.786913 | orchestrator | 2025-09-17 16:24:57 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:24:57.788818 | orchestrator | 2025-09-17 16:24:57 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:24:57.788848 | orchestrator | 2025-09-17 16:24:57 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:25:00.827350 | orchestrator | 2025-09-17 16:25:00 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:25:00.829133 | orchestrator | 2025-09-17 16:25:00 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:25:00.829211 | orchestrator | 2025-09-17 16:25:00 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:25:03.864969 | orchestrator | 2025-09-17 16:25:03 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:25:03.865886 | orchestrator | 2025-09-17 16:25:03 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:25:03.866351 | orchestrator | 2025-09-17 16:25:03 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:25:06.906230 | orchestrator | 2025-09-17 16:25:06 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:25:06.907718 | orchestrator | 2025-09-17 16:25:06 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:25:06.907949 | orchestrator | 2025-09-17 16:25:06 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:25:09.966416 | orchestrator | 2025-09-17 16:25:09 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:25:09.966496 | orchestrator | 2025-09-17 16:25:09 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in 
state STARTED 2025-09-17 16:25:09.966510 | orchestrator | 2025-09-17 16:25:09 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:25:13.008377 | orchestrator | 2025-09-17 16:25:13 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:25:13.009414 | orchestrator | 2025-09-17 16:25:13 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:25:13.009443 | orchestrator | 2025-09-17 16:25:13 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:25:16.055129 | orchestrator | 2025-09-17 16:25:16 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:25:16.055296 | orchestrator | 2025-09-17 16:25:16 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:25:16.055316 | orchestrator | 2025-09-17 16:25:16 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:25:19.100805 | orchestrator | 2025-09-17 16:25:19 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state STARTED 2025-09-17 16:25:19.102498 | orchestrator | 2025-09-17 16:25:19 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:25:19.102754 | orchestrator | 2025-09-17 16:25:19 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:25:22.144693 | orchestrator | 2025-09-17 16:25:22 | INFO  | Task ef2071b4-b861-453b-82c0-70021d4f2acb is in state SUCCESS 2025-09-17 16:25:22.147345 | orchestrator | 2025-09-17 16:25:22.147483 | orchestrator | 2025-09-17 16:25:22.147541 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:25:22.147555 | orchestrator | 2025-09-17 16:25:22.147565 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:25:22.147576 | orchestrator | Wednesday 17 September 2025 16:21:24 +0000 (0:00:00.215) 0:00:00.215 *** 2025-09-17 16:25:22.147586 | orchestrator | ok: [testbed-node-0] 2025-09-17 
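The wait loop above (each task's state is re-checked on a fixed interval and reported until it leaves STARTED) can be sketched as follows; `get_task_state` is a hypothetical lookup callable introduced for illustration, not the real OSISM client API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until every task leaves the STARTED state.

    `get_task_state` is a hypothetical callable mapping a task ID to its
    current state string (e.g. "STARTED", "SUCCESS").
    """
    pending = set(task_ids)
    results = {}
    while pending:
        # sorted() copies the set, so discarding inside the loop is safe
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                results[task_id] = state
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

As in the log, a task that reaches SUCCESS simply drops out of the next polling round while the remaining tasks keep being checked.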
2025-09-17 16:25:22.147597 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:25:22.147606 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:25:22.147616 | orchestrator |
2025-09-17 16:25:22.147626 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 16:25:22.147635 | orchestrator | Wednesday 17 September 2025 16:21:25 +0000 (0:00:00.314) 0:00:00.529 ***
2025-09-17 16:25:22.147645 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-09-17 16:25:22.147655 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-09-17 16:25:22.147664 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-09-17 16:25:22.147674 | orchestrator |
2025-09-17 16:25:22.147683 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-09-17 16:25:22.147693 | orchestrator |
2025-09-17 16:25:22.147702 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-09-17 16:25:22.147712 | orchestrator | Wednesday 17 September 2025 16:21:25 +0000 (0:00:00.525) 0:00:01.055 ***
2025-09-17 16:25:22.147744 | orchestrator |
2025-09-17 16:25:22.147754 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2025-09-17 16:25:22.147763 | orchestrator |
2025-09-17 16:25:22.147773 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2025-09-17 16:25:22.147782 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:25:22.147791 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:25:22.147800 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:25:22.147811 | orchestrator |
2025-09-17 16:25:22.147821 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:25:22.147833 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:25:22.147846 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:25:22.147857 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:25:22.147867 | orchestrator |
2025-09-17 16:25:22.147878 | orchestrator |
2025-09-17 16:25:22.147888 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:25:22.147899 | orchestrator | Wednesday 17 September 2025 16:24:51 +0000 (0:03:25.805) 0:03:26.861 ***
2025-09-17 16:25:22.147910 | orchestrator | ===============================================================================
2025-09-17 16:25:22.147921 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 205.81s
2025-09-17 16:25:22.147932 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2025-09-17 16:25:22.147943 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-09-17 16:25:22.147953 | orchestrator |
2025-09-17 16:25:22.147963 | orchestrator |
2025-09-17 16:25:22.147974 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 16:25:22.147984 | orchestrator |
2025-09-17 16:25:22.147994 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 16:25:22.148004 | orchestrator | Wednesday 17 September 2025 16:23:11 +0000 (0:00:00.251) 0:00:00.251 ***
2025-09-17 16:25:22.148015 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:25:22.148025 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:25:22.148064 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:25:22.148075 | orchestrator |
2025-09-17 16:25:22.148085 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 16:25:22.148095 | orchestrator | Wednesday 17 September 2025
16:23:12 +0000 (0:00:00.286) 0:00:00.538 ***
2025-09-17 16:25:22.148106 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-09-17 16:25:22.148117 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-09-17 16:25:22.148128 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-09-17 16:25:22.148138 | orchestrator |
2025-09-17 16:25:22.148149 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-09-17 16:25:22.148159 | orchestrator |
2025-09-17 16:25:22.148170 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-17 16:25:22.148188 | orchestrator | Wednesday 17 September 2025 16:23:12 +0000 (0:00:00.379) 0:00:00.917 ***
2025-09-17 16:25:22.148197 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:25:22.148207 | orchestrator |
2025-09-17 16:25:22.148216 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-09-17 16:25:22.148226 | orchestrator | Wednesday 17 September 2025 16:23:13 +0000 (0:00:00.500) 0:00:01.418 ***
2025-09-17 16:25:22.148238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-17 16:25:22.148282 | orchestrator | changed: [testbed-node-0] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148294 | orchestrator | changed: [testbed-node-2] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148304 | orchestrator |
2025-09-17 16:25:22.148314 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-09-17 16:25:22.148323 | orchestrator | Wednesday 17 September 2025 16:23:13 +0000 (0:00:00.732) 0:00:02.151 ***
2025-09-17 16:25:22.148333 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-09-17 16:25:22.148343 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-09-17 16:25:22.148352 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-17 16:25:22.148362 | orchestrator |
2025-09-17 16:25:22.148372 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-17 16:25:22.148381 | orchestrator | Wednesday 17 September 2025 16:23:14 +0000 (0:00:00.792) 0:00:02.943 ***
2025-09-17 16:25:22.148390 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:25:22.148400 | orchestrator |
2025-09-17 16:25:22.148409 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-09-17 16:25:22.148418 | orchestrator | Wednesday 17 September 2025 16:23:15 +0000 (0:00:00.615) 0:00:03.559 ***
2025-09-17 16:25:22.148428 | orchestrator | changed: [testbed-node-1] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148443 | orchestrator | changed: [testbed-node-0] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148466 | orchestrator | changed: [testbed-node-2] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148477 | orchestrator |
2025-09-17 16:25:22.148486 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-09-17 16:25:22.148496 | orchestrator | Wednesday 17 September 2025 16:23:16 +0000 (0:00:01.349) 0:00:04.909 ***
2025-09-17 16:25:22.148505 | orchestrator | skipping: [testbed-node-0] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148515 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:25:22.148525 | orchestrator | skipping: [testbed-node-1] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148535 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:25:22.148545 | orchestrator | skipping: [testbed-node-2] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148554 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:25:22.148564 | orchestrator |
2025-09-17 16:25:22.148573 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-09-17 16:25:22.148582 | orchestrator | Wednesday 17 September 2025 16:23:16 +0000 (0:00:00.331) 0:00:05.240 ***
2025-09-17 16:25:22.148596 | orchestrator | skipping: [testbed-node-0] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148613 | orchestrator | skipping: [testbed-node-1] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148623 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:25:22.148632 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:25:22.148648 | orchestrator | skipping: [testbed-node-2] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148658 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:25:22.148667 | orchestrator |
2025-09-17 16:25:22.148677 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-09-17 16:25:22.148686 | orchestrator | Wednesday 17 September 2025 16:23:17 +0000 (0:00:00.777) 0:00:06.017 ***
2025-09-17 16:25:22.148695 | orchestrator | changed: [testbed-node-0] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148705 | orchestrator | changed: [testbed-node-1] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148715 | orchestrator | changed: [testbed-node-2] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148731 | orchestrator |
2025-09-17 16:25:22.148740 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-09-17 16:25:22.148754 | orchestrator | Wednesday 17 September 2025 16:23:18 +0000 (0:00:01.209) 0:00:07.227 ***
2025-09-17 16:25:22.148764 | orchestrator | changed: [testbed-node-0] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148781 | orchestrator | changed: [testbed-node-1] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148792 | orchestrator | changed: [testbed-node-2] => (item: grafana service dict, identical to the first occurrence above)
2025-09-17 16:25:22.148801 | orchestrator |
2025-09-17 16:25:22.148811 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-09-17 16:25:22.148820 | orchestrator | Wednesday 17 September 2025 16:23:20 +0000 (0:00:01.282) 0:00:08.510 ***
2025-09-17 16:25:22.148830 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:25:22.148839 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:25:22.148848 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:25:22.148858 | orchestrator |
2025-09-17 16:25:22.148867 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-09-17 16:25:22.148877 | orchestrator | Wednesday 17 September 2025 16:23:20 +0000 (0:00:00.570) 0:00:09.080 ***
2025-09-17 16:25:22.148886 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-17 16:25:22.148896 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-17 16:25:22.148905 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-17 16:25:22.148914 | orchestrator |
2025-09-17 16:25:22.148924 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-09-17 16:25:22.148939 | orchestrator | Wednesday 17 September 2025 16:23:22 +0000 (0:00:01.240) 0:00:10.320 ***
2025-09-17 16:25:22.148948 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-17 16:25:22.148958 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-17 16:25:22.148967 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-17 16:25:22.148977 | orchestrator |
2025-09-17 16:25:22.148986 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-09-17 16:25:22.148995 | orchestrator | Wednesday 17 September 2025 16:23:23 +0000 (0:00:01.405) 0:00:11.726 ***
2025-09-17 16:25:22.149005 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-17 16:25:22.149014 | orchestrator |
2025-09-17 16:25:22.149023 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-09-17 16:25:22.149056 | orchestrator | Wednesday 17 September 2025 16:23:24 +0000 (0:00:00.722) 0:00:12.448 ***
2025-09-17 16:25:22.149066 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-09-17 16:25:22.149075 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-09-17 16:25:22.149085 | orchestrator
| ok: [testbed-node-0] 2025-09-17 16:25:22.149094 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:25:22.149103 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:25:22.149113 | orchestrator | 2025-09-17 16:25:22.149122 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-17 16:25:22.149136 | orchestrator | Wednesday 17 September 2025 16:23:24 +0000 (0:00:00.732) 0:00:13.180 *** 2025-09-17 16:25:22.149145 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:25:22.149155 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:25:22.149164 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:25:22.149173 | orchestrator | 2025-09-17 16:25:22.149182 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-17 16:25:22.149192 | orchestrator | Wednesday 17 September 2025 16:23:25 +0000 (0:00:00.542) 0:00:13.723 *** 2025-09-17 16:25:22.149202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1060860, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5762289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 
'inode': 1060860, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5762289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1060860, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5762289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1060927, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.588465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1060927, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.588465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1060927, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.588465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1060888, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.578215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1060888, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.578215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1060888, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.578215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1060929, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5891433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1060929, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5891433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1060929, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5891433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1060904, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.583992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1060904, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.583992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1060904, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.583992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1060919, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5870152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1060919, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5870152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1060919, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5870152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1060857, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5731459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': 
{'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1060857, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5731459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1060857, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5731459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1060881, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.576856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1060881, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.576856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1060881, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.576856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1060891, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.578215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149522 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1060891, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.578215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1060891, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.578215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1060913, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5857646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149563 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1060913, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5857646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1060913, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5857646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1060924, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5882535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149598 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1060924, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5882535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1060924, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5882535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1060883, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5779107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149638 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1060883, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5779107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1060883, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5779107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1060918, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5861433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-09-17 16:25:22.149672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1060918, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5861433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1060918, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5861433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1060906, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5845606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1060906, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5845606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1060906, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5845606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1060901, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5834284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1060901, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5834284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1060901, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5834284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1060898, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.582143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1060898, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.582143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1060898, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.582143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1060917, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5861433, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1060917, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5861433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1060917, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5861433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1060893, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5804582, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1060893, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5804582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1060893, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5804582, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1060921, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 
'ctime': 1758123291.5871432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1060921, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5871432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1061390, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.721767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1060921, 'dev': 110, 
'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5871432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1061390, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.721767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1061390, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.721767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
410814, 'inode': 1060992, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6631444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1060992, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6631444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1060940, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5931432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.149991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1060992, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6631444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1060940, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5931432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1061090, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6671445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1060940, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5931432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1061090, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6671445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1060934, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5906615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1061090, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6671445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1060934, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5906615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1060934, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5906615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150834 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1061211, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6957977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1061211, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6957977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1061093, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6934814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1061211, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6957977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1061093, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6934814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1061093, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6934814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1061216, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6961722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1061216, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6961722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1061216, 
'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6961722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1061378, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7207232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.150998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1061378, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7207232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 21109, 'inode': 1061206, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6951008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1061378, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7207232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1061206, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6951008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1061081, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6662498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1061206, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6951008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1061081, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6662498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1060985, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6432505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1061081, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6662498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1060985, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6432505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1061073, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6642516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1060985, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6432505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1061073, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6642516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151214 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1060942, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.641378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1061073, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6642516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1060942, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.641378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 
16:25:22.151244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1061088, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6666327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1060942, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.641378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1061088, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6666327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1061238, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7196198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1061088, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6666327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1061238, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7196198, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1061238, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.7196198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1061221, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.698762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 
1061221, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.698762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1061221, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.698762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1060936, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5911434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1060936, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5911434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1060936, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5911434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1060939, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5921433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1060939, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5921433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1060939, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.5921433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1061199, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6944792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1061199, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6944792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1061219, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.696512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1061199, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.6944792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-09-17 16:25:22.151529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1061219, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.696512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1061219, 'dev': 110, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758123291.696512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-17 16:25:22.151553 | orchestrator | 2025-09-17 16:25:22.151565 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-17 16:25:22.151577 | orchestrator | Wednesday 17 September 2025 16:24:02 +0000 (0:00:37.567) 0:00:51.291 *** 2025-09-17 16:25:22.151594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 16:25:22.151606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 16:25:22.151617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-17 16:25:22.151633 | orchestrator | 2025-09-17 16:25:22.151644 | orchestrator | TASK [grafana : Creating grafana database] 
************************************* 2025-09-17 16:25:22.151655 | orchestrator | Wednesday 17 September 2025 16:24:04 +0000 (0:00:01.072) 0:00:52.364 *** 2025-09-17 16:25:22.151666 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:25:22.151676 | orchestrator | 2025-09-17 16:25:22.151685 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-17 16:25:22.151695 | orchestrator | Wednesday 17 September 2025 16:24:06 +0000 (0:00:02.443) 0:00:54.807 *** 2025-09-17 16:25:22.151704 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:25:22.151713 | orchestrator | 2025-09-17 16:25:22.151722 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-17 16:25:22.151732 | orchestrator | Wednesday 17 September 2025 16:24:08 +0000 (0:00:02.353) 0:00:57.161 *** 2025-09-17 16:25:22.151742 | orchestrator | 2025-09-17 16:25:22.151752 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-17 16:25:22.151761 | orchestrator | Wednesday 17 September 2025 16:24:09 +0000 (0:00:00.246) 0:00:57.407 *** 2025-09-17 16:25:22.151770 | orchestrator | 2025-09-17 16:25:22.151780 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-17 16:25:22.151789 | orchestrator | Wednesday 17 September 2025 16:24:09 +0000 (0:00:00.064) 0:00:57.472 *** 2025-09-17 16:25:22.151799 | orchestrator | 2025-09-17 16:25:22.151808 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-17 16:25:22.151817 | orchestrator | Wednesday 17 September 2025 16:24:09 +0000 (0:00:00.066) 0:00:57.538 *** 2025-09-17 16:25:22.151827 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:25:22.151836 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:25:22.151845 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:25:22.151855 | orchestrator | 2025-09-17 
16:25:22.151868 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-17 16:25:22.151878 | orchestrator | Wednesday 17 September 2025 16:24:11 +0000 (0:00:01.890) 0:00:59.429 *** 2025-09-17 16:25:22.151887 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:25:22.151897 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:25:22.151906 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-17 16:25:22.151916 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-17 16:25:22.151926 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-09-17 16:25:22.151935 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:25:22.151945 | orchestrator | 2025-09-17 16:25:22.151954 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-17 16:25:22.151963 | orchestrator | Wednesday 17 September 2025 16:24:50 +0000 (0:00:38.974) 0:01:38.404 *** 2025-09-17 16:25:22.151973 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:25:22.151982 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:25:22.151991 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:25:22.152001 | orchestrator | 2025-09-17 16:25:22.152010 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-17 16:25:22.152019 | orchestrator | Wednesday 17 September 2025 16:25:15 +0000 (0:00:25.828) 0:02:04.232 *** 2025-09-17 16:25:22.152048 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:25:22.152058 | orchestrator | 2025-09-17 16:25:22.152073 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-17 16:25:22.152083 | orchestrator | Wednesday 17 September 2025 16:25:18 +0000 (0:00:02.340) 0:02:06.573 
*** 2025-09-17 16:25:22.152092 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:25:22.152107 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:25:22.152116 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:25:22.152126 | orchestrator | 2025-09-17 16:25:22.152135 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-17 16:25:22.152144 | orchestrator | Wednesday 17 September 2025 16:25:18 +0000 (0:00:00.454) 0:02:07.028 *** 2025-09-17 16:25:22.152155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-09-17 16:25:22.152166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-17 16:25:22.152176 | orchestrator | 2025-09-17 16:25:22.152186 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-17 16:25:22.152195 | orchestrator | Wednesday 17 September 2025 16:25:21 +0000 (0:00:02.544) 0:02:09.572 *** 2025-09-17 16:25:22.152204 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:25:22.152214 | orchestrator | 2025-09-17 16:25:22.152223 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:25:22.152232 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-17 16:25:22.152243 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  
rescued=0 ignored=0 2025-09-17 16:25:22.152252 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-17 16:25:22.152261 | orchestrator | 2025-09-17 16:25:22.152271 | orchestrator | 2025-09-17 16:25:22.152280 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:25:22.152289 | orchestrator | Wednesday 17 September 2025 16:25:21 +0000 (0:00:00.282) 0:02:09.855 *** 2025-09-17 16:25:22.152299 | orchestrator | =============================================================================== 2025-09-17 16:25:22.152308 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.97s 2025-09-17 16:25:22.152317 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.57s 2025-09-17 16:25:22.152326 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 25.83s 2025-09-17 16:25:22.152336 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.54s 2025-09-17 16:25:22.152345 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.44s 2025-09-17 16:25:22.152354 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.35s 2025-09-17 16:25:22.152363 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.34s 2025-09-17 16:25:22.152372 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.89s 2025-09-17 16:25:22.152382 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.41s 2025-09-17 16:25:22.152391 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.35s 2025-09-17 16:25:22.152400 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.28s 2025-09-17 16:25:22.152409 | 
orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.24s 2025-09-17 16:25:22.152422 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.21s 2025-09-17 16:25:22.152432 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.07s 2025-09-17 16:25:22.152441 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.79s 2025-09-17 16:25:22.152456 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.78s 2025-09-17 16:25:22.152465 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.73s 2025-09-17 16:25:22.152474 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s 2025-09-17 16:25:22.152483 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.72s 2025-09-17 16:25:22.152492 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.62s 2025-09-17 16:25:22.152502 | orchestrator | 2025-09-17 16:25:22 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:25:22.152511 | orchestrator | 2025-09-17 16:25:22 | INFO  | Wait 1 second(s) until the next check
the next check 2025-09-17 16:27:35.933984 | orchestrator | 2025-09-17 16:27:35 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:27:35.934135 | orchestrator | 2025-09-17 16:27:35 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:27:38.977847 | orchestrator | 2025-09-17 16:27:38 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:27:38.977978 | orchestrator | 2025-09-17 16:27:38 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:27:42.018226 | orchestrator | 2025-09-17 16:27:42 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:27:42.018313 | orchestrator | 2025-09-17 16:27:42 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:27:45.059167 | orchestrator | 2025-09-17 16:27:45 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:27:45.059255 | orchestrator | 2025-09-17 16:27:45 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:27:48.098656 | orchestrator | 2025-09-17 16:27:48 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:27:48.098729 | orchestrator | 2025-09-17 16:27:48 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:27:51.143652 | orchestrator | 2025-09-17 16:27:51 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:27:51.143707 | orchestrator | 2025-09-17 16:27:51 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:27:54.182284 | orchestrator | 2025-09-17 16:27:54 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:27:54.182377 | orchestrator | 2025-09-17 16:27:54 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:27:57.230492 | orchestrator | 2025-09-17 16:27:57 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:27:57.230580 | orchestrator | 2025-09-17 16:27:57 | INFO  | Wait 1 second(s) until the next check 
2025-09-17 16:28:00.268032 | orchestrator | 2025-09-17 16:28:00 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:00.268103 | orchestrator | 2025-09-17 16:28:00 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:03.312171 | orchestrator | 2025-09-17 16:28:03 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:03.312297 | orchestrator | 2025-09-17 16:28:03 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:06.357685 | orchestrator | 2025-09-17 16:28:06 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:06.357774 | orchestrator | 2025-09-17 16:28:06 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:09.403141 | orchestrator | 2025-09-17 16:28:09 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:09.403242 | orchestrator | 2025-09-17 16:28:09 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:12.441114 | orchestrator | 2025-09-17 16:28:12 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:12.441184 | orchestrator | 2025-09-17 16:28:12 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:15.477126 | orchestrator | 2025-09-17 16:28:15 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:15.477205 | orchestrator | 2025-09-17 16:28:15 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:18.499166 | orchestrator | 2025-09-17 16:28:18 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:18.499210 | orchestrator | 2025-09-17 16:28:18 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:21.519757 | orchestrator | 2025-09-17 16:28:21 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:21.519810 | orchestrator | 2025-09-17 16:28:21 | INFO  | Wait 1 second(s) until the next check 2025-09-17 
16:28:24.546891 | orchestrator | 2025-09-17 16:28:24 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:24.547002 | orchestrator | 2025-09-17 16:28:24 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:27.589316 | orchestrator | 2025-09-17 16:28:27 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:27.589420 | orchestrator | 2025-09-17 16:28:27 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:30.627320 | orchestrator | 2025-09-17 16:28:30 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:30.627405 | orchestrator | 2025-09-17 16:28:30 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:33.668821 | orchestrator | 2025-09-17 16:28:33 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:33.668959 | orchestrator | 2025-09-17 16:28:33 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:36.711854 | orchestrator | 2025-09-17 16:28:36 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:36.711986 | orchestrator | 2025-09-17 16:28:36 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:39.754866 | orchestrator | 2025-09-17 16:28:39 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:39.755003 | orchestrator | 2025-09-17 16:28:39 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:42.795406 | orchestrator | 2025-09-17 16:28:42 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:42.795491 | orchestrator | 2025-09-17 16:28:42 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:45.838849 | orchestrator | 2025-09-17 16:28:45 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state STARTED 2025-09-17 16:28:45.838992 | orchestrator | 2025-09-17 16:28:45 | INFO  | Wait 1 second(s) until the next check 2025-09-17 16:28:48.877222 
2025-09-17 16:28:57.998847 | orchestrator | 2025-09-17 16:28:57 | INFO  | Task aa632587-ee3d-4b85-9dd4-ced6559d778a is in state SUCCESS
2025-09-17 16:28:57.999795 | orchestrator |
2025-09-17 16:28:57.999832 | orchestrator |
2025-09-17 16:28:57.999845 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-17 16:28:57.999857 | orchestrator |
2025-09-17 16:28:57.999868 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-09-17 16:28:57.999993 | orchestrator | Wednesday 17 September 2025 16:20:49 +0000 (0:00:00.276) 0:00:00.276 ***
2025-09-17 16:28:58.000009 | orchestrator | changed: [testbed-manager]
2025-09-17 16:28:58.000036 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.000048 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:28:58.000059 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:28:58.000069 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:28:58.000124 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:28:58.000136 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:28:58.000146 | orchestrator |
2025-09-17 16:28:58.000158 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-17 16:28:58.000168 | orchestrator | Wednesday 17 September 2025 16:20:50 +0000 (0:00:01.046) 0:00:01.322 ***
2025-09-17 16:28:58.000179 | orchestrator | changed: [testbed-manager]
2025-09-17 16:28:58.000293 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.000318 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:28:58.000331 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:28:58.000341 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:28:58.000352 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:28:58.000362 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:28:58.000373 | orchestrator |
2025-09-17 16:28:58.000384 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-17 16:28:58.000395 | orchestrator | Wednesday 17 September 2025 16:20:50 +0000 (0:00:00.818) 0:00:02.141 ***
2025-09-17 16:28:58.000408 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-09-17 16:28:58.000421 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-09-17 16:28:58.000433 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-09-17 16:28:58.000446 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-09-17 16:28:58.000458 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-09-17 16:28:58.000471 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-09-17 16:28:58.000484 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-09-17 16:28:58.000497 | orchestrator |
2025-09-17 16:28:58.000510 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-09-17 16:28:58.000522 | orchestrator |
2025-09-17 16:28:58.000534 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-17 16:28:58.000546 | orchestrator | Wednesday 17 September 2025 16:20:51 +0000 (0:00:01.081) 0:00:03.222 ***
2025-09-17 16:28:58.000559 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:28:58.000572 | orchestrator |
2025-09-17 16:28:58.000585 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-09-17 16:28:58.000597 | orchestrator | Wednesday 17 September 2025 16:20:52 +0000 (0:00:00.918) 0:00:04.140 ***
2025-09-17 16:28:58.000610 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-09-17 16:28:58.000622 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-09-17 16:28:58.000650 | orchestrator |
2025-09-17 16:28:58.000663 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-09-17 16:28:58.000675 | orchestrator | Wednesday 17 September 2025 16:20:57 +0000 (0:00:04.598) 0:00:08.738 ***
2025-09-17 16:28:58.000687 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 16:28:58.000699 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-17 16:28:58.000711 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.000724 | orchestrator |
2025-09-17 16:28:58.000736 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-17 16:28:58.000749 | orchestrator | Wednesday 17 September 2025 16:21:01 +0000 (0:00:04.225) 0:00:12.964 ***
2025-09-17 16:28:58.000819 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.000832 | orchestrator |
2025-09-17 16:28:58.000843 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-09-17 16:28:58.000854 | orchestrator | Wednesday 17 September 2025 16:21:02 +0000 (0:00:00.577) 0:00:13.541 ***
2025-09-17 16:28:58.000876 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.000887 | orchestrator |
2025-09-17 16:28:58.000897 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-09-17 16:28:58.000954 | orchestrator | Wednesday 17 September 2025 16:21:03 +0000 (0:00:01.394) 0:00:14.936 ***
2025-09-17 16:28:58.000967 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.000977 | orchestrator |
2025-09-17 16:28:58.000988 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-17 16:28:58.000999 | orchestrator | Wednesday 17 September 2025 16:21:06 +0000 (0:00:02.940) 0:00:17.876 ***
2025-09-17 16:28:58.001009 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.001020 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.001031 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.001041 | orchestrator |
2025-09-17 16:28:58.001052 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-17 16:28:58.001063 | orchestrator | Wednesday 17 September 2025 16:21:07 +0000 (0:00:00.449) 0:00:18.326 ***
2025-09-17 16:28:58.001074 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:28:58.001084 | orchestrator |
2025-09-17 16:28:58.001095 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-09-17 16:28:58.001106 | orchestrator | Wednesday 17 September 2025 16:21:41 +0000 (0:00:34.425) 0:00:52.751 ***
2025-09-17 16:28:58.001116 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.001127 | orchestrator |
2025-09-17 16:28:58.001138 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-17 16:28:58.001148 | orchestrator | Wednesday 17 September 2025 16:21:56 +0000 (0:00:14.822) 0:01:07.573 ***
2025-09-17 16:28:58.001159 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:28:58.001170 | orchestrator |
2025-09-17 16:28:58.001180 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-17 16:28:58.001191 | orchestrator | Wednesday 17 September 2025 16:22:08 +0000 (0:00:12.162) 0:01:19.735 ***
2025-09-17 16:28:58.001216 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:28:58.001227 | orchestrator |
2025-09-17 16:28:58.001238 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-09-17 16:28:58.001249 | orchestrator | Wednesday 17 September 2025 16:22:09 +0000 (0:00:00.817) 0:01:20.553 ***
2025-09-17 16:28:58.001260 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.001270 | orchestrator |
2025-09-17 16:28:58.001288 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-17 16:28:58.001300 | orchestrator | Wednesday 17 September 2025 16:22:09 +0000 (0:00:00.406) 0:01:20.959 ***
2025-09-17 16:28:58.001310 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:28:58.001321 | orchestrator |
2025-09-17 16:28:58.001332 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-17 16:28:58.001342 | orchestrator | Wednesday 17 September 2025 16:22:10 +0000 (0:00:00.456) 0:01:21.415 ***
2025-09-17 16:28:58.001353 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:28:58.001376 | orchestrator |
2025-09-17 16:28:58.001387 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-17 16:28:58.001398 | orchestrator | Wednesday 17 September 2025 16:22:28 +0000 (0:00:18.341) 0:01:39.756 ***
2025-09-17 16:28:58.001409 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.001419 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.001430 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.001440 | orchestrator |
2025-09-17 16:28:58.001451 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-09-17 16:28:58.001462 | orchestrator |
2025-09-17 16:28:58.001472 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-17 16:28:58.001483 | orchestrator | Wednesday 17 September 2025 16:22:28 +0000 (0:00:00.307) 0:01:40.064 ***
2025-09-17 16:28:58.001494 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:28:58.001512 | orchestrator |
2025-09-17 16:28:58.001523 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-09-17 16:28:58.001534 | orchestrator | Wednesday 17 September 2025 16:22:29 +0000 (0:00:00.573) 0:01:40.638 ***
2025-09-17 16:28:58.001544 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.001555 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.001566 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.001576 | orchestrator |
2025-09-17 16:28:58.001587 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-09-17 16:28:58.001598 | orchestrator | Wednesday 17 September 2025 16:22:31 +0000 (0:00:02.309) 0:01:42.947 ***
2025-09-17 16:28:58.001609 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.001620 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.001631 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.001641 | orchestrator |
2025-09-17 16:28:58.001652 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-17 16:28:58.001663 | orchestrator | Wednesday 17 September 2025 16:22:33 +0000 (0:00:02.189) 0:01:45.136 ***
2025-09-17 16:28:58.001674 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.001685 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.001695 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.001706 | orchestrator |
2025-09-17 16:28:58.001717 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-17 16:28:58.001728 | orchestrator | Wednesday 17 September 2025 16:22:34 +0000 (0:00:00.410) 0:01:45.546 ***
2025-09-17 16:28:58.001738 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-17 16:28:58.001749 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.001760 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-17 16:28:58.001770 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.001781 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-17 16:28:58.001792 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-09-17 16:28:58.001803 | orchestrator |
2025-09-17 16:28:58.001813 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-17 16:28:58.001824 | orchestrator | Wednesday 17 September 2025 16:22:43 +0000 (0:00:08.762) 0:01:54.309 ***
2025-09-17 16:28:58.001835 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.001846 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.001856 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.001867 | orchestrator |
2025-09-17 16:28:58.001878 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-17 16:28:58.001889 | orchestrator | Wednesday 17 September 2025 16:22:43 +0000 (0:00:00.342) 0:01:54.651 ***
2025-09-17 16:28:58.001899 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-17 16:28:58.001926 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.001937 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-17 16:28:58.001947 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.001958 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-17 16:28:58.001968 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.001979 | orchestrator |
2025-09-17 16:28:58.001990 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-17 16:28:58.002000 | orchestrator | Wednesday 17 September 2025 16:22:44 +0000 (0:00:00.617) 0:01:55.269 ***
2025-09-17 16:28:58.002011 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.002250 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.002275 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.002294 | orchestrator |
2025-09-17 16:28:58.002314 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-09-17 16:28:58.002331 | orchestrator | Wednesday 17 September 2025 16:22:44 +0000 (0:00:00.516) 0:01:55.786 ***
2025-09-17 16:28:58.002350 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.002369 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.002400 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.002418 | orchestrator |
2025-09-17 16:28:58.002437 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-09-17 16:28:58.002455 | orchestrator | Wednesday 17 September 2025 16:22:45 +0000 (0:00:01.002) 0:01:56.788 ***
2025-09-17 16:28:58.002473 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.002491 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.002530 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.002550 | orchestrator |
2025-09-17 16:28:58.002571 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-09-17 16:28:58.002592 | orchestrator | Wednesday 17 September 2025 16:22:47 +0000 (0:00:02.125) 0:01:58.914 ***
2025-09-17 16:28:58.002612 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.002630 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.002658 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:28:58.002680 | orchestrator |
2025-09-17 16:28:58.002700 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-17 16:28:58.002720 | orchestrator | Wednesday 17 September 2025 16:23:06 +0000 (0:00:19.009) 0:02:17.924 ***
2025-09-17 16:28:58.002742 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.002766 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.002788 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:28:58.002883 | orchestrator |
2025-09-17 16:28:58.002930 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-17 16:28:58.002951 | orchestrator | Wednesday 17 September 2025 16:23:19 +0000 (0:00:12.505) 0:02:30.430 ***
2025-09-17 16:28:58.002983 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:28:58.003004 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.003078 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.003098 | orchestrator |
2025-09-17 16:28:58.003117 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-09-17 16:28:58.003135 | orchestrator | Wednesday 17 September 2025 16:23:20 +0000 (0:00:00.884) 0:02:31.314 ***
2025-09-17 16:28:58.003154 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.003172 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.003190 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.003208 | orchestrator |
2025-09-17 16:28:58.003226 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-09-17 16:28:58.003244 | orchestrator | Wednesday 17 September 2025 16:23:32 +0000 (0:00:11.975) 0:02:43.289 ***
2025-09-17 16:28:58.003262 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.003280 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.003298 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.003316 | orchestrator |
2025-09-17 16:28:58.003335 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-17 16:28:58.003357 | orchestrator | Wednesday 17 September 2025 16:23:33 +0000 (0:00:01.429) 0:02:44.718 ***
2025-09-17 16:28:58.003377 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.003395 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.003413 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.003431 | orchestrator |
2025-09-17 16:28:58.003449 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-09-17 16:28:58.003467 | orchestrator |
2025-09-17 16:28:58.003486 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-17 16:28:58.003506 | orchestrator | Wednesday 17 September 2025 16:23:33 +0000 (0:00:00.323) 0:02:45.042 ***
2025-09-17 16:28:58.003524 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:28:58.003543 | orchestrator |
2025-09-17 16:28:58.003561 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-09-17 16:28:58.003579 | orchestrator | Wednesday 17 September 2025 16:23:34 +0000 (0:00:00.523) 0:02:45.565 ***
2025-09-17 16:28:58.003596 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-09-17 16:28:58.003630 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-09-17 16:28:58.003650 | orchestrator |
2025-09-17 16:28:58.003667 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-09-17 16:28:58.003684 | orchestrator | Wednesday 17 September 2025 16:23:37 +0000 (0:00:03.400) 0:02:48.966 ***
2025-09-17 16:28:58.003703 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-09-17 16:28:58.003723 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-09-17 16:28:58.003742 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-09-17 16:28:58.003754 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-09-17 16:28:58.003765 | orchestrator |
2025-09-17 16:28:58.003776 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-09-17 16:28:58.003787 | orchestrator | Wednesday 17 September 2025 16:23:44 +0000 (0:00:06.714) 0:02:55.681 ***
2025-09-17 16:28:58.003797 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-17 16:28:58.003808 | orchestrator |
2025-09-17 16:28:58.003819 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-09-17 16:28:58.003829 | orchestrator | Wednesday 17 September 2025 16:23:47 +0000 (0:00:03.351) 0:02:59.032 ***
2025-09-17 16:28:58.003840 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-17 16:28:58.003850 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-09-17 16:28:58.003884 | orchestrator |
2025-09-17 16:28:58.003896 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-09-17 16:28:58.003943 | orchestrator | Wednesday 17 September 2025 16:23:51 +0000 (0:00:03.955) 0:03:02.987 ***
2025-09-17 16:28:58.003955 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-17 16:28:58.003965 | orchestrator |
2025-09-17 16:28:58.003976 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-09-17 16:28:58.003987 | orchestrator | Wednesday 17 September 2025 16:23:55 +0000 (0:00:03.335) 0:03:06.322 ***
2025-09-17 16:28:58.003997 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-09-17 16:28:58.004008 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-09-17 16:28:58.004018 | orchestrator |
2025-09-17 16:28:58.004029 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-17 16:28:58.004061 | orchestrator | Wednesday 17 September 2025 16:24:02 +0000 (0:00:07.924) 0:03:14.247 ***
2025-09-17 16:28:58.004087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-17 16:28:58.004105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-17 16:28:58.004128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-17 16:28:58.004154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.004171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.004184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.004203 | orchestrator | 2025-09-17 16:28:58.004216 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-17 16:28:58.004229 | orchestrator | Wednesday 17 September 2025 16:24:04 +0000 (0:00:01.384) 0:03:15.632 *** 2025-09-17 16:28:58.004242 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.004255 | orchestrator | 2025-09-17 16:28:58.004267 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-17 16:28:58.004280 | orchestrator | Wednesday 17 September 2025 16:24:04 +0000 (0:00:00.128) 0:03:15.761 *** 2025-09-17 16:28:58.004292 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.004305 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.004317 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.004330 | orchestrator | 2025-09-17 16:28:58.004343 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-17 16:28:58.004355 | orchestrator | Wednesday 17 September 2025 16:24:04 +0000 (0:00:00.467) 0:03:16.228 *** 2025-09-17 16:28:58.004368 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-17 16:28:58.004380 | orchestrator | 2025-09-17 16:28:58.004392 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-17 16:28:58.004405 | orchestrator | Wednesday 17 September 2025 16:24:05 +0000 (0:00:00.696) 0:03:16.924 *** 2025-09-17 16:28:58.004417 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.004429 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.004442 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.004454 | orchestrator | 2025-09-17 16:28:58.004466 | orchestrator | TASK [nova 
: include_tasks] **************************************************** 2025-09-17 16:28:58.004479 | orchestrator | Wednesday 17 September 2025 16:24:05 +0000 (0:00:00.325) 0:03:17.250 *** 2025-09-17 16:28:58.004491 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:28:58.004504 | orchestrator | 2025-09-17 16:28:58.004514 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-17 16:28:58.004525 | orchestrator | Wednesday 17 September 2025 16:24:06 +0000 (0:00:00.523) 0:03:17.774 *** 2025-09-17 16:28:58.004544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:28:58.004562 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:28:58.004581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:28:58.004594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.004606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.004626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.004637 | orchestrator | 2025-09-17 16:28:58.004660 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-17 16:28:58.004678 | orchestrator | Wednesday 17 September 2025 16:24:09 +0000 (0:00:02.591) 0:03:20.365 *** 2025-09-17 16:28:58.004690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 16:28:58.004702 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.004713 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.004725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 16:28:58.004743 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.004754 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.004779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 16:28:58.004791 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.004802 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.004813 | orchestrator | 2025-09-17 16:28:58.004824 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-17 16:28:58.004835 | orchestrator | Wednesday 17 September 2025 16:24:09 +0000 (0:00:00.570) 0:03:20.935 *** 2025-09-17 16:28:58.004846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 16:28:58.004858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.004875 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.004900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 16:28:58.004931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.004943 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.004954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 16:28:58.004966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.004977 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.004994 | orchestrator | 2025-09-17 16:28:58.005005 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-17 16:28:58.005016 | orchestrator | Wednesday 17 September 2025 16:24:10 +0000 (0:00:00.796) 0:03:21.732 *** 2025-09-17 16:28:58.005040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:28:58.005054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:28:58.005067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:28:58.005084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.005108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.005120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.005131 | orchestrator | 2025-09-17 16:28:58.005142 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-17 16:28:58.005153 | orchestrator | Wednesday 17 September 2025 16:24:13 +0000 (0:00:02.538) 0:03:24.270 *** 2025-09-17 16:28:58.005164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:28:58.005176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:28:58.005207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:28:58.005220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.005231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.005243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.005254 | orchestrator | 2025-09-17 16:28:58.005265 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-17 16:28:58.005275 | orchestrator | Wednesday 17 September 2025 16:24:18 +0000 (0:00:05.233) 0:03:29.504 *** 2025-09-17 16:28:58.005293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 16:28:58.005317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.005328 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.005340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': 
'30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 16:28:58.005352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.005363 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.005374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-17 16:28:58.005408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.005420 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.005431 | orchestrator | 2025-09-17 16:28:58.005442 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-17 16:28:58.005453 | orchestrator | Wednesday 17 September 2025 16:24:18 +0000 (0:00:00.575) 0:03:30.080 *** 2025-09-17 16:28:58.005463 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:28:58.005474 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:28:58.005484 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:28:58.005495 | orchestrator | 2025-09-17 16:28:58.005505 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 
2025-09-17 16:28:58.005516 | orchestrator | Wednesday 17 September 2025 16:24:20 +0000 (0:00:01.703) 0:03:31.784 *** 2025-09-17 16:28:58.005527 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.005537 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.005547 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.005558 | orchestrator | 2025-09-17 16:28:58.005568 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-17 16:28:58.005579 | orchestrator | Wednesday 17 September 2025 16:24:20 +0000 (0:00:00.468) 0:03:32.253 *** 2025-09-17 16:28:58.005590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:28:58.005608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:28:58.005635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-17 16:28:58.005648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.005660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.005671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.005688 | orchestrator |
2025-09-17 16:28:58.005699 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-17 16:28:58.005710 | orchestrator | Wednesday 17 September 2025 16:24:22 +0000 (0:00:01.817) 0:03:34.070 ***
2025-09-17 16:28:58.005721 | orchestrator |
2025-09-17 16:28:58.005732 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-17 16:28:58.005743 | orchestrator | Wednesday 17 September 2025 16:24:22 +0000 (0:00:00.127) 0:03:34.198 ***
2025-09-17 16:28:58.005753 | orchestrator |
2025-09-17 16:28:58.005764 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-17 16:28:58.005774 | orchestrator | Wednesday 17 September 2025 16:24:23 +0000 (0:00:00.144) 0:03:34.343 ***
2025-09-17 16:28:58.005785 | orchestrator |
2025-09-17 16:28:58.005795 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-09-17 16:28:58.005806 | orchestrator | Wednesday 17 September 2025 16:24:23 +0000 (0:00:00.125) 0:03:34.469 ***
2025-09-17 16:28:58.005817 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.005828 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:28:58.005838 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:28:58.005849 | orchestrator |
2025-09-17 16:28:58.005860 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-09-17 16:28:58.005870 | orchestrator | Wednesday 17 September 2025 16:24:43 +0000 (0:00:19.909) 0:03:54.378 ***
2025-09-17 16:28:58.005881 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:28:58.005892 | orchestrator | changed: [testbed-node-1]
2025-09-17 16:28:58.005902 | orchestrator | changed: [testbed-node-2]
2025-09-17 16:28:58.005928 | orchestrator |
2025-09-17 16:28:58.005939 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-09-17 16:28:58.005950 | orchestrator |
2025-09-17 16:28:58.005961 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-17 16:28:58.005971 | orchestrator | Wednesday 17 September 2025 16:24:48 +0000 (0:00:05.486) 0:03:59.864 ***
2025-09-17 16:28:58.005982 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-17 16:28:58.005994 | orchestrator |
2025-09-17 16:28:58.006010 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-17 16:28:58.006055 | orchestrator | Wednesday 17 September 2025 16:24:49 +0000 (0:00:01.112) 0:04:00.976 ***
2025-09-17 16:28:58.006066 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:28:58.006077 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:28:58.006087 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:28:58.006098 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.006109 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.006125 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.006136 | orchestrator |
2025-09-17 16:28:58.006147 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-09-17 16:28:58.006157 | orchestrator | Wednesday 17 September 2025 16:24:50 +0000 (0:00:00.738) 0:04:01.715 ***
2025-09-17 16:28:58.006168 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.006179 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.006189 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.006200 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:28:58.006210 | orchestrator |
2025-09-17 16:28:58.006221 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-17 16:28:58.006232 | orchestrator | Wednesday 17 September 2025 16:24:51 +0000 (0:00:00.812) 0:04:02.527 ***
2025-09-17 16:28:58.006243 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-09-17 16:28:58.006261 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-09-17 16:28:58.006271 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-09-17 16:28:58.006282 | orchestrator |
2025-09-17 16:28:58.006293 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-17 16:28:58.006303 | orchestrator | Wednesday 17 September 2025 16:24:52 +0000 (0:00:01.057) 0:04:03.585 ***
2025-09-17 16:28:58.006314 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-09-17 16:28:58.006325 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-09-17 16:28:58.006336 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-09-17 16:28:58.006346 | orchestrator |
2025-09-17 16:28:58.006357 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-17 16:28:58.006368 | orchestrator | Wednesday 17 September 2025 16:24:53 +0000 (0:00:01.227) 0:04:04.813 ***
2025-09-17 16:28:58.006378 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-09-17 16:28:58.006389 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:28:58.006399 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-09-17 16:28:58.006410 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:28:58.006421 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-09-17 16:28:58.006431 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:28:58.006442 | orchestrator |
2025-09-17 16:28:58.006453 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-09-17 16:28:58.006464 | orchestrator | Wednesday 17 September 2025 16:24:54 +0000 (0:00:00.546) 0:04:05.359 ***
2025-09-17 16:28:58.006475 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-17 16:28:58.006486 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-17 16:28:58.006496 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-17 16:28:58.006507 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-17 16:28:58.006518 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.006528 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-17 16:28:58.006539 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-17 16:28:58.006550 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.006560 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-17 16:28:58.006571 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-17 16:28:58.006581 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.006592 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-17 16:28:58.006603 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-17 16:28:58.006614 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-17 16:28:58.006624 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-17 16:28:58.006635 | orchestrator |
2025-09-17 16:28:58.006645 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-09-17 16:28:58.006656 | orchestrator | Wednesday 17 September 2025 16:24:55 +0000 (0:00:01.286) 0:04:06.646 ***
2025-09-17 16:28:58.006667 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.006677 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.006688 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.006698 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:28:58.006709 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:28:58.006720 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:28:58.006730 | orchestrator |
2025-09-17 16:28:58.006741 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-09-17 16:28:58.006752 | orchestrator | Wednesday 17 September 2025 16:24:56 +0000 (0:00:01.196) 0:04:07.842 ***
2025-09-17 16:28:58.006768 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.006779 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.006789 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.006800 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:28:58.006811 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:28:58.006821 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:28:58.006832 | orchestrator |
2025-09-17 16:28:58.006842 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-17 16:28:58.006853 | orchestrator | Wednesday 17 September 2025 16:24:58 +0000 (0:00:01.791) 0:04:09.634 ***
2025-09-17 16:28:58.006877 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 16:28:58.006891 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 16:28:58.006903 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 16:28:58.006970 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 16:28:58.006982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007015 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007028 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007040 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007116 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007149 | orchestrator | 2025-09-17 16:28:58.007161 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-17 16:28:58.007171 | orchestrator | Wednesday 17 September 2025 16:25:00 +0000 (0:00:02.449) 0:04:12.085 *** 2025-09-17 16:28:58.007183 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:28:58.007194 | orchestrator | 2025-09-17 16:28:58.007204 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-17 16:28:58.007215 | orchestrator | Wednesday 17 September 2025 16:25:01 +0000 (0:00:01.176) 0:04:13.261 *** 2025-09-17 16:28:58.007233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007431 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007648 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007665 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.007675 | orchestrator | 2025-09-17 16:28:58.007685 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-17 16:28:58.007695 | orchestrator | Wednesday 17 September 2025 16:25:05 +0000 (0:00:03.731) 0:04:16.993 *** 2025-09-17 16:28:58.007705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-17 16:28:58.007715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-17 16:28:58.007731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.007741 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.007778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-17 16:28:58.007795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-17 16:28:58.007805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.007815 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:28:58.007825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-17 16:28:58.007841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-17 16:28:58.007851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.007861 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:28:58.007904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-17 16:28:58.007995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.008007 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.008019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-17 16:28:58.008038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.008049 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.008061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-17 16:28:58.008072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.008084 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.008094 | orchestrator | 2025-09-17 16:28:58.008105 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-17 16:28:58.008116 | orchestrator | Wednesday 17 September 2025 16:25:07 +0000 (0:00:01.452) 0:04:18.446 *** 2025-09-17 16:28:58.008164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-17 16:28:58.008178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-17 16:28:58.008190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.008209 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.008221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 16:28:58.008232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 16:28:58.008275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.008288 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:28:58.008299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 16:28:58.008315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 16:28:58.008325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.008334 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:28:58.008344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 16:28:58.008353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.008362 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.008395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 16:28:58.008405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.008419 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.008427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 16:28:58.008435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.008443 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.008451 | orchestrator |
2025-09-17 16:28:58.008458 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-17 16:28:58.008466 | orchestrator | Wednesday 17 September 2025 16:25:08 +0000 (0:00:01.743) 0:04:20.189 ***
2025-09-17 16:28:58.008474 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.008481 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.008489 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.008497 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-17 16:28:58.008505 | orchestrator |
2025-09-17 16:28:58.008512 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-09-17 16:28:58.008520 | orchestrator | Wednesday 17 September 2025 16:25:09 +0000 (0:00:00.884) 0:04:21.074 ***
2025-09-17 16:28:58.008528 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-17 16:28:58.008536 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-17 16:28:58.008543 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-17 16:28:58.008551 | orchestrator |
2025-09-17 16:28:58.008559 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-09-17 16:28:58.008566 | orchestrator | Wednesday 17 September 2025 16:25:10 +0000 (0:00:00.826) 0:04:21.900 ***
2025-09-17 16:28:58.008574 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-17 16:28:58.008582 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-17 16:28:58.008589 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-17 16:28:58.008597 | orchestrator |
2025-09-17 16:28:58.008604 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-09-17 16:28:58.008612 | orchestrator | Wednesday 17 September 2025 16:25:11 +0000 (0:00:01.077) 0:04:22.978 ***
2025-09-17 16:28:58.008620 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:28:58.008628 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:28:58.008635 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:28:58.008643 | orchestrator |
2025-09-17 16:28:58.008651 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-09-17 16:28:58.008659 | orchestrator | Wednesday 17 September 2025 16:25:12 +0000 (0:00:00.523) 0:04:23.502 ***
2025-09-17 16:28:58.008666 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:28:58.008674 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:28:58.008682 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:28:58.008689 | orchestrator |
2025-09-17 16:28:58.008697 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-09-17 16:28:58.008705 | orchestrator | Wednesday 17 September 2025 16:25:12 +0000 (0:00:00.508) 0:04:24.010 ***
2025-09-17 16:28:58.008717 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-17 16:28:58.008746 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-17 16:28:58.008755 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-17 16:28:58.008763 | orchestrator |
2025-09-17 16:28:58.008771 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-09-17 16:28:58.008778 | orchestrator | Wednesday 17 September 2025 16:25:13 +0000 (0:00:01.174) 0:04:25.184 ***
2025-09-17 16:28:58.008793 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-17 16:28:58.008801 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-17 16:28:58.008808 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-17 16:28:58.008816 | orchestrator |
2025-09-17 16:28:58.008824 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-09-17 16:28:58.008832 | orchestrator | Wednesday 17 September 2025 16:25:15 +0000 (0:00:01.375) 0:04:26.559 ***
2025-09-17 16:28:58.008840 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-17 16:28:58.008847 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-17 16:28:58.008855 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-17 16:28:58.008863 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-09-17 16:28:58.008871 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-09-17 16:28:58.008878 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-09-17 16:28:58.008886 | orchestrator |
2025-09-17 16:28:58.008893 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-09-17 16:28:58.008901 | orchestrator | Wednesday 17 September 2025 16:25:18 +0000 (0:00:03.636) 0:04:30.196 ***
2025-09-17 16:28:58.008925 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:28:58.008933 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:28:58.008940 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:28:58.008948 | orchestrator |
2025-09-17 16:28:58.008956 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-09-17 16:28:58.008964 | orchestrator | Wednesday 17 September 2025 16:25:19 +0000 (0:00:00.302) 0:04:30.499 ***
2025-09-17 16:28:58.008972 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:28:58.008979 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:28:58.008987 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:28:58.008995 | orchestrator |
2025-09-17 16:28:58.009002 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-09-17 16:28:58.009010 | orchestrator | Wednesday 17 September 2025 16:25:19 +0000 (0:00:00.283) 0:04:30.782 ***
2025-09-17 16:28:58.009018 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:28:58.009026 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:28:58.009033 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:28:58.009041 | orchestrator |
2025-09-17 16:28:58.009049 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-09-17 16:28:58.009056 | orchestrator | Wednesday 17 September 2025 16:25:21 +0000 (0:00:01.688) 0:04:32.471 ***
2025-09-17 16:28:58.009064 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-17 16:28:58.009073 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-17 16:28:58.009081 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-17 16:28:58.009089 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-17 16:28:58.009097 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-17 16:28:58.009105 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-17 16:28:58.009118 | orchestrator |
2025-09-17 16:28:58.009126 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-09-17 16:28:58.009134 | orchestrator | Wednesday 17 September 2025 16:25:24 +0000 (0:00:03.182) 0:04:35.654 ***
2025-09-17 16:28:58.009142 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-17 16:28:58.009150 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-17 16:28:58.009157 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-17 16:28:58.009165 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-17 16:28:58.009173 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:28:58.009181 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-17 16:28:58.009188 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:28:58.009196 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-17 16:28:58.009204 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:28:58.009211 | orchestrator |
2025-09-17 16:28:58.009219 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-09-17 16:28:58.009227 | orchestrator | Wednesday 17 September 2025 16:25:27 +0000 (0:00:03.208) 0:04:38.862 ***
2025-09-17 16:28:58.009235 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:28:58.009242 | orchestrator |
2025-09-17 16:28:58.009250 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-09-17 16:28:58.009258 | orchestrator | Wednesday 17 September 2025 16:25:27 +0000 (0:00:00.128) 0:04:38.991 ***
2025-09-17 16:28:58.009266 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:28:58.009274 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:28:58.009281 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:28:58.009289 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.009297 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.009304 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.009312 | orchestrator |
2025-09-17 16:28:58.009320 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-09-17 16:28:58.009351 | orchestrator | Wednesday 17 September 2025 16:25:28 +0000 (0:00:00.714) 0:04:39.705 ***
2025-09-17 16:28:58.009360 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-17 16:28:58.009368 | orchestrator |
2025-09-17 16:28:58.009376 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-09-17 16:28:58.009384 | orchestrator | Wednesday 17 September 2025 16:25:29 +0000 (0:00:00.671) 0:04:40.377 ***
2025-09-17 16:28:58.009395 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:28:58.009403 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:28:58.009411 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:28:58.009419 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:28:58.009426 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:28:58.009434 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:28:58.009441 | orchestrator |
2025-09-17 16:28:58.009449 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-09-17 16:28:58.009457 | orchestrator | Wednesday 17 September 2025 16:25:29 +0000 (0:00:00.585) 0:04:40.962 ***
2025-09-17 16:28:58.009465 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 16:28:58.009479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 16:28:58.009487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 16:28:58.009496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 16:28:58.009513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 16:28:58.009521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 16:28:58.009530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 16:28:58.009543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 16:28:58.009551 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 16:28:58.009559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.009567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.009585 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.009594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.009607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.009615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.009623 | orchestrator |
2025-09-17 16:28:58.009631 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-09-17 16:28:58.009639 | orchestrator | Wednesday 17 September 2025 16:25:33 +0000 (0:00:03.927) 0:04:44.890 ***
2025-09-17 16:28:58.009647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 16:28:58.009663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 16:28:58.009671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 16:28:58.009685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 16:28:58.009693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-17 16:28:58.009701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-17 16:28:58.009715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.009726 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.009740 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.009748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 16:28:58.009756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 16:28:58.009764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-17 16:28:58.009776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.009788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.009801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-17 16:28:58.009809 | orchestrator |
2025-09-17 16:28:58.009817 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-09-17 16:28:58.009825 | orchestrator | Wednesday 17
September 2025 16:25:39 +0000 (0:00:05.764) 0:04:50.654 *** 2025-09-17 16:28:58.009833 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.009840 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:28:58.009848 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:28:58.009856 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.009863 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.009871 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.009879 | orchestrator | 2025-09-17 16:28:58.009886 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-17 16:28:58.009894 | orchestrator | Wednesday 17 September 2025 16:25:40 +0000 (0:00:01.384) 0:04:52.039 *** 2025-09-17 16:28:58.009902 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-17 16:28:58.009924 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-17 16:28:58.009932 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-17 16:28:58.009940 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-17 16:28:58.009948 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.009956 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-17 16:28:58.009963 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-17 16:28:58.009971 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-17 16:28:58.009979 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-17 16:28:58.009987 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.009995 | orchestrator | skipping: 
[testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-17 16:28:58.010002 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.010010 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-17 16:28:58.010043 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-17 16:28:58.010051 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-17 16:28:58.010059 | orchestrator | 2025-09-17 16:28:58.010067 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-17 16:28:58.010075 | orchestrator | Wednesday 17 September 2025 16:25:44 +0000 (0:00:03.380) 0:04:55.419 *** 2025-09-17 16:28:58.010083 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.010091 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:28:58.010099 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:28:58.010107 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.010115 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.010123 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.010136 | orchestrator | 2025-09-17 16:28:58.010144 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-17 16:28:58.010152 | orchestrator | Wednesday 17 September 2025 16:25:44 +0000 (0:00:00.733) 0:04:56.153 *** 2025-09-17 16:28:58.010160 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-17 16:28:58.010168 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-17 16:28:58.010181 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 
'nova-compute'})  2025-09-17 16:28:58.010189 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-17 16:28:58.010197 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-17 16:28:58.010212 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-17 16:28:58.010220 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-17 16:28:58.010228 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-17 16:28:58.010236 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-17 16:28:58.010243 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-17 16:28:58.010251 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.010259 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-17 16:28:58.010267 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.010275 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-17 16:28:58.010283 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.010291 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-17 16:28:58.010299 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-17 16:28:58.010307 | orchestrator | changed: [testbed-node-5] => 
(item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-17 16:28:58.010317 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-17 16:28:58.010331 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-17 16:28:58.010345 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-17 16:28:58.010358 | orchestrator | 2025-09-17 16:28:58.010372 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-17 16:28:58.010380 | orchestrator | Wednesday 17 September 2025 16:25:49 +0000 (0:00:05.032) 0:05:01.186 *** 2025-09-17 16:28:58.010388 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-17 16:28:58.010396 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-17 16:28:58.010404 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-17 16:28:58.010412 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-17 16:28:58.010420 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-17 16:28:58.010427 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-17 16:28:58.010441 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-17 16:28:58.010449 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-17 16:28:58.010456 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-17 16:28:58.010464 | orchestrator | 
skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-17 16:28:58.010472 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-17 16:28:58.010479 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-17 16:28:58.010487 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-17 16:28:58.010495 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-17 16:28:58.010503 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.010510 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.010518 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-17 16:28:58.010526 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.010534 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-17 16:28:58.010542 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-17 16:28:58.010549 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-17 16:28:58.010557 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-17 16:28:58.010565 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-17 16:28:58.010577 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-17 16:28:58.010586 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-17 16:28:58.010593 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-17 16:28:58.010605 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-17 16:28:58.010613 | orchestrator | 2025-09-17 16:28:58.010621 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-17 16:28:58.010628 | orchestrator | Wednesday 17 September 2025 16:25:56 +0000 (0:00:06.715) 0:05:07.901 *** 2025-09-17 16:28:58.010636 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.010644 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:28:58.010652 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:28:58.010660 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.010667 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.010675 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.010683 | orchestrator | 2025-09-17 16:28:58.010690 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-17 16:28:58.010698 | orchestrator | Wednesday 17 September 2025 16:25:57 +0000 (0:00:00.539) 0:05:08.441 *** 2025-09-17 16:28:58.010706 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.010714 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:28:58.010721 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:28:58.010729 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.010737 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.010744 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.010752 | orchestrator | 2025-09-17 16:28:58.010760 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-17 16:28:58.010768 | orchestrator | Wednesday 17 September 2025 16:25:57 +0000 (0:00:00.738) 0:05:09.180 *** 2025-09-17 16:28:58.010775 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.010783 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.010796 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.010803 | 
orchestrator | changed: [testbed-node-3] 2025-09-17 16:28:58.010811 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:28:58.010818 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:28:58.010826 | orchestrator | 2025-09-17 16:28:58.010834 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-17 16:28:58.010842 | orchestrator | Wednesday 17 September 2025 16:25:59 +0000 (0:00:01.765) 0:05:10.945 *** 2025-09-17 16:28:58.010850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-17 16:28:58.010859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}})  2025-09-17 16:28:58.010867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.010875 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.010891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-17 16:28:58.010900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-17 16:28:58.010955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.010964 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:28:58.010973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-17 16:28:58.010981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-17 16:28:58.011000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.011009 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:28:58.011018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-17 16:28:58.011032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.011040 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.011048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-17 16:28:58.011056 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.011064 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.011072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-17 16:28:58.011084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-17 16:28:58.011093 | orchestrator | skipping: [testbed-node-2] 2025-09-17 
16:28:58.011100 | orchestrator | 2025-09-17 16:28:58.011108 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-17 16:28:58.011120 | orchestrator | Wednesday 17 September 2025 16:26:01 +0000 (0:00:01.579) 0:05:12.525 *** 2025-09-17 16:28:58.011128 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-17 16:28:58.011136 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-17 16:28:58.011149 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.011157 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-17 16:28:58.011165 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-17 16:28:58.011173 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:28:58.011180 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-17 16:28:58.011188 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-17 16:28:58.011196 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:28:58.011204 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-17 16:28:58.011212 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-17 16:28:58.011219 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.011227 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-17 16:28:58.011235 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-17 16:28:58.011243 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.011250 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-17 16:28:58.011258 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-17 16:28:58.011266 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.011274 | orchestrator | 2025-09-17 16:28:58.011282 | orchestrator | TASK [nova-cell : Check 
nova-cell containers] ********************************** 2025-09-17 16:28:58.011289 | orchestrator | Wednesday 17 September 2025 16:26:01 +0000 (0:00:00.610) 0:05:13.136 *** 2025-09-17 16:28:58.011298 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}}) 2025-09-17 16:28:58.011318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011344 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-17 16:28:58.011479 | orchestrator | 2025-09-17 16:28:58.011486 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-17 16:28:58.011493 | orchestrator | Wednesday 17 September 2025 16:26:04 +0000 (0:00:02.967) 0:05:16.104 *** 2025-09-17 16:28:58.011499 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.011506 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:28:58.011512 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:28:58.011522 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.011529 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.011535 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.011542 | orchestrator | 2025-09-17 
16:28:58.011548 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-17 16:28:58.011555 | orchestrator | Wednesday 17 September 2025 16:26:05 +0000 (0:00:00.582) 0:05:16.687 *** 2025-09-17 16:28:58.011561 | orchestrator | 2025-09-17 16:28:58.011574 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-17 16:28:58.011580 | orchestrator | Wednesday 17 September 2025 16:26:05 +0000 (0:00:00.135) 0:05:16.823 *** 2025-09-17 16:28:58.011587 | orchestrator | 2025-09-17 16:28:58.011593 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-17 16:28:58.011600 | orchestrator | Wednesday 17 September 2025 16:26:05 +0000 (0:00:00.306) 0:05:17.129 *** 2025-09-17 16:28:58.011606 | orchestrator | 2025-09-17 16:28:58.011613 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-17 16:28:58.011619 | orchestrator | Wednesday 17 September 2025 16:26:05 +0000 (0:00:00.127) 0:05:17.256 *** 2025-09-17 16:28:58.011626 | orchestrator | 2025-09-17 16:28:58.011632 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-17 16:28:58.011638 | orchestrator | Wednesday 17 September 2025 16:26:06 +0000 (0:00:00.130) 0:05:17.387 *** 2025-09-17 16:28:58.011645 | orchestrator | 2025-09-17 16:28:58.011652 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-17 16:28:58.011658 | orchestrator | Wednesday 17 September 2025 16:26:06 +0000 (0:00:00.119) 0:05:17.506 *** 2025-09-17 16:28:58.011664 | orchestrator | 2025-09-17 16:28:58.011671 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-17 16:28:58.011677 | orchestrator | Wednesday 17 September 2025 16:26:06 +0000 (0:00:00.135) 0:05:17.642 *** 2025-09-17 16:28:58.011684 | orchestrator | changed: 
[testbed-node-0] 2025-09-17 16:28:58.011690 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:28:58.011697 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:28:58.011703 | orchestrator | 2025-09-17 16:28:58.011710 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-17 16:28:58.011716 | orchestrator | Wednesday 17 September 2025 16:26:18 +0000 (0:00:11.882) 0:05:29.524 *** 2025-09-17 16:28:58.011722 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:28:58.011729 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:28:58.011735 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:28:58.011742 | orchestrator | 2025-09-17 16:28:58.011748 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-17 16:28:58.011755 | orchestrator | Wednesday 17 September 2025 16:26:30 +0000 (0:00:12.260) 0:05:41.785 *** 2025-09-17 16:28:58.011761 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:28:58.011767 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:28:58.011774 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:28:58.011780 | orchestrator | 2025-09-17 16:28:58.011787 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-17 16:28:58.011793 | orchestrator | Wednesday 17 September 2025 16:26:49 +0000 (0:00:19.355) 0:06:01.140 *** 2025-09-17 16:28:58.011800 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:28:58.011810 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:28:58.011817 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:28:58.011823 | orchestrator | 2025-09-17 16:28:58.011830 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-17 16:28:58.011837 | orchestrator | Wednesday 17 September 2025 16:27:25 +0000 (0:00:35.535) 0:06:36.676 *** 2025-09-17 16:28:58.011843 | orchestrator | changed: 
[testbed-node-3] 2025-09-17 16:28:58.011850 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:28:58.011856 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:28:58.011862 | orchestrator | 2025-09-17 16:28:58.011869 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-17 16:28:58.011876 | orchestrator | Wednesday 17 September 2025 16:27:26 +0000 (0:00:00.733) 0:06:37.409 *** 2025-09-17 16:28:58.011882 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:28:58.011889 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:28:58.011895 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:28:58.011901 | orchestrator | 2025-09-17 16:28:58.011921 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-17 16:28:58.011928 | orchestrator | Wednesday 17 September 2025 16:27:27 +0000 (0:00:00.940) 0:06:38.350 *** 2025-09-17 16:28:58.011934 | orchestrator | changed: [testbed-node-4] 2025-09-17 16:28:58.011941 | orchestrator | changed: [testbed-node-3] 2025-09-17 16:28:58.011947 | orchestrator | changed: [testbed-node-5] 2025-09-17 16:28:58.011954 | orchestrator | 2025-09-17 16:28:58.011960 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-17 16:28:58.011967 | orchestrator | Wednesday 17 September 2025 16:27:52 +0000 (0:00:25.316) 0:07:03.667 *** 2025-09-17 16:28:58.011973 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.011980 | orchestrator | 2025-09-17 16:28:58.011986 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-17 16:28:58.011993 | orchestrator | Wednesday 17 September 2025 16:27:52 +0000 (0:00:00.089) 0:07:03.756 *** 2025-09-17 16:28:58.011999 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:28:58.012006 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:28:58.012012 | orchestrator | skipping: 
[testbed-node-1] 2025-09-17 16:28:58.012019 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.012025 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.012032 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-09-17 16:28:58.012038 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-17 16:28:58.012045 | orchestrator | 2025-09-17 16:28:58.012052 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-17 16:28:58.012058 | orchestrator | Wednesday 17 September 2025 16:28:14 +0000 (0:00:21.980) 0:07:25.737 *** 2025-09-17 16:28:58.012065 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:28:58.012071 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.012077 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:28:58.012084 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.012093 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.012100 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.012107 | orchestrator | 2025-09-17 16:28:58.012113 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-17 16:28:58.012120 | orchestrator | Wednesday 17 September 2025 16:28:21 +0000 (0:00:06.825) 0:07:32.562 *** 2025-09-17 16:28:58.012126 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:28:58.012136 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:28:58.012143 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.012149 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.012156 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.012162 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-09-17 16:28:58.012169 | orchestrator | 2025-09-17 16:28:58.012175 | orchestrator | 
TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-17 16:28:58.012186 | orchestrator | Wednesday 17 September 2025 16:28:24 +0000 (0:00:03.558) 0:07:36.120 *** 2025-09-17 16:28:58.012193 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-17 16:28:58.012199 | orchestrator | 2025-09-17 16:28:58.012206 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-17 16:28:58.012212 | orchestrator | Wednesday 17 September 2025 16:28:36 +0000 (0:00:12.078) 0:07:48.198 *** 2025-09-17 16:28:58.012219 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-17 16:28:58.012225 | orchestrator | 2025-09-17 16:28:58.012232 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-17 16:28:58.012238 | orchestrator | Wednesday 17 September 2025 16:28:38 +0000 (0:00:01.257) 0:07:49.455 *** 2025-09-17 16:28:58.012245 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.012251 | orchestrator | 2025-09-17 16:28:58.012258 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-17 16:28:58.012264 | orchestrator | Wednesday 17 September 2025 16:28:39 +0000 (0:00:01.249) 0:07:50.705 *** 2025-09-17 16:28:58.012271 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-17 16:28:58.012277 | orchestrator | 2025-09-17 16:28:58.012284 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-17 16:28:58.012290 | orchestrator | Wednesday 17 September 2025 16:28:50 +0000 (0:00:10.588) 0:08:01.293 *** 2025-09-17 16:28:58.012297 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:28:58.012303 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:28:58.012310 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:28:58.012316 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:28:58.012323 | 
orchestrator | ok: [testbed-node-1] 2025-09-17 16:28:58.012329 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:28:58.012335 | orchestrator | 2025-09-17 16:28:58.012342 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-17 16:28:58.012349 | orchestrator | 2025-09-17 16:28:58.012355 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-17 16:28:58.012362 | orchestrator | Wednesday 17 September 2025 16:28:51 +0000 (0:00:01.652) 0:08:02.946 *** 2025-09-17 16:28:58.012368 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:28:58.012375 | orchestrator | changed: [testbed-node-1] 2025-09-17 16:28:58.012381 | orchestrator | changed: [testbed-node-2] 2025-09-17 16:28:58.012388 | orchestrator | 2025-09-17 16:28:58.012394 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-17 16:28:58.012401 | orchestrator | 2025-09-17 16:28:58.012407 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-17 16:28:58.012414 | orchestrator | Wednesday 17 September 2025 16:28:52 +0000 (0:00:00.916) 0:08:03.863 *** 2025-09-17 16:28:58.012420 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.012426 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.012433 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.012439 | orchestrator | 2025-09-17 16:28:58.012446 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-17 16:28:58.012452 | orchestrator | 2025-09-17 16:28:58.012459 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-17 16:28:58.012465 | orchestrator | Wednesday 17 September 2025 16:28:53 +0000 (0:00:00.654) 0:08:04.517 *** 2025-09-17 16:28:58.012472 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-17 
16:28:58.012479 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-17 16:28:58.012485 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-17 16:28:58.012492 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-17 16:28:58.012498 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-17 16:28:58.012505 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-17 16:28:58.012511 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:28:58.012522 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-17 16:28:58.012528 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-17 16:28:58.012535 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-17 16:28:58.012541 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-17 16:28:58.012548 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-17 16:28:58.012554 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-17 16:28:58.012561 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:28:58.012567 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-17 16:28:58.012574 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-17 16:28:58.012580 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-17 16:28:58.012587 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-17 16:28:58.012593 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-17 16:28:58.012600 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-17 16:28:58.012606 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:28:58.012613 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-17 
16:28:58.012622 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-17 16:28:58.012629 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-17 16:28:58.012635 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-17 16:28:58.012642 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-17 16:28:58.012651 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-17 16:28:58.012658 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.012665 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-17 16:28:58.012671 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-17 16:28:58.012678 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-17 16:28:58.012684 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-17 16:28:58.012690 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-17 16:28:58.012697 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-17 16:28:58.012703 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.012710 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-17 16:28:58.012716 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-17 16:28:58.012722 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-17 16:28:58.012729 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-17 16:28:58.012735 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-17 16:28:58.012742 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-17 16:28:58.012748 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.012755 | orchestrator | 2025-09-17 16:28:58.012761 | orchestrator | PLAY [Reload global Nova 
API services] ***************************************** 2025-09-17 16:28:58.012768 | orchestrator | 2025-09-17 16:28:58.012774 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-17 16:28:58.012780 | orchestrator | Wednesday 17 September 2025 16:28:54 +0000 (0:00:01.218) 0:08:05.736 *** 2025-09-17 16:28:58.012787 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-17 16:28:58.012794 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-17 16:28:58.012800 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.012806 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-17 16:28:58.012813 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-17 16:28:58.012819 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.012830 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-17 16:28:58.012836 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-09-17 16:28:58.012843 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.012849 | orchestrator | 2025-09-17 16:28:58.012856 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-09-17 16:28:58.012863 | orchestrator | 2025-09-17 16:28:58.012869 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-09-17 16:28:58.012876 | orchestrator | Wednesday 17 September 2025 16:28:54 +0000 (0:00:00.505) 0:08:06.241 *** 2025-09-17 16:28:58.012882 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.012889 | orchestrator | 2025-09-17 16:28:58.012895 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-09-17 16:28:58.012902 | orchestrator | 2025-09-17 16:28:58.012920 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-09-17 
16:28:58.012927 | orchestrator | Wednesday 17 September 2025 16:28:55 +0000 (0:00:01.022) 0:08:07.264 *** 2025-09-17 16:28:58.012933 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:28:58.012940 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:28:58.012946 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:28:58.012953 | orchestrator | 2025-09-17 16:28:58.012959 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:28:58.012966 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-17 16:28:58.012974 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-17 16:28:58.012981 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-17 16:28:58.012987 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-17 16:28:58.012994 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-17 16:28:58.013001 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-17 16:28:58.013007 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-17 16:28:58.013014 | orchestrator | 2025-09-17 16:28:58.013020 | orchestrator | 2025-09-17 16:28:58.013027 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:28:58.013033 | orchestrator | Wednesday 17 September 2025 16:28:56 +0000 (0:00:00.420) 0:08:07.684 *** 2025-09-17 16:28:58.013040 | orchestrator | =============================================================================== 2025-09-17 16:28:58.013050 | orchestrator | nova-cell : Restart nova-libvirt container 
----------------------------- 35.54s 2025-09-17 16:28:58.013057 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.42s 2025-09-17 16:28:58.013063 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.32s 2025-09-17 16:28:58.013070 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.98s 2025-09-17 16:28:58.013080 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.91s 2025-09-17 16:28:58.013086 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.36s 2025-09-17 16:28:58.013093 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.01s 2025-09-17 16:28:58.013099 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.34s 2025-09-17 16:28:58.013106 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.82s 2025-09-17 16:28:58.013117 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.51s 2025-09-17 16:28:58.013123 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.26s 2025-09-17 16:28:58.013130 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.16s 2025-09-17 16:28:58.013136 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.08s 2025-09-17 16:28:58.013142 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.98s 2025-09-17 16:28:58.013149 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.88s 2025-09-17 16:28:58.013155 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.59s 2025-09-17 16:28:58.013162 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist 
------------------- 8.76s 2025-09-17 16:28:58.013169 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.92s 2025-09-17 16:28:58.013175 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 6.83s 2025-09-17 16:28:58.013182 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 6.72s 2025-09-17 16:28:58.013188 | orchestrator | 2025-09-17 16:28:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-17 16:29:58.803630 | orchestrator | 2025-09-17 16:29:58.974898 | orchestrator | 2025-09-17 16:29:58.979740 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Sep 17 16:29:58 UTC 2025 2025-09-17 16:29:58.979771 | orchestrator | 2025-09-17 16:29:59.359804 | orchestrator | ok: Runtime: 0:33:38.169543 2025-09-17 16:29:59.618964 | 2025-09-17 16:29:59.619143 | TASK [Bootstrap services] 2025-09-17 16:30:00.262137 | orchestrator | 2025-09-17 16:30:00.262292 | orchestrator | # BOOTSTRAP 2025-09-17 16:30:00.262314 | orchestrator | 2025-09-17 16:30:00.262329 | orchestrator | + set -e 2025-09-17 16:30:00.262342 | orchestrator | + echo 2025-09-17 16:30:00.262356 | orchestrator | + echo '# BOOTSTRAP' 2025-09-17 16:30:00.262373 | orchestrator | + echo 2025-09-17 16:30:00.262418 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-17 16:30:00.270328 | orchestrator | + set -e 2025-09-17 16:30:00.270934 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-17 16:30:03.567238 | orchestrator | 2025-09-17 16:30:03 | INFO  | It takes a moment until task
1dec6ff3-0822-4016-b4eb-93364d791b34 (flavor-manager) has been started and output is visible here. 2025-09-17 16:30:11.755233 | orchestrator | 2025-09-17 16:30:07 | INFO  | Flavor SCS-1V-4 created 2025-09-17 16:30:11.755350 | orchestrator | 2025-09-17 16:30:07 | INFO  | Flavor SCS-2V-8 created 2025-09-17 16:30:11.755368 | orchestrator | 2025-09-17 16:30:08 | INFO  | Flavor SCS-4V-16 created 2025-09-17 16:30:11.755381 | orchestrator | 2025-09-17 16:30:08 | INFO  | Flavor SCS-8V-32 created 2025-09-17 16:30:11.755393 | orchestrator | 2025-09-17 16:30:08 | INFO  | Flavor SCS-1V-2 created 2025-09-17 16:30:11.755404 | orchestrator | 2025-09-17 16:30:08 | INFO  | Flavor SCS-2V-4 created 2025-09-17 16:30:11.755416 | orchestrator | 2025-09-17 16:30:08 | INFO  | Flavor SCS-4V-8 created 2025-09-17 16:30:11.755428 | orchestrator | 2025-09-17 16:30:08 | INFO  | Flavor SCS-8V-16 created 2025-09-17 16:30:11.755449 | orchestrator | 2025-09-17 16:30:09 | INFO  | Flavor SCS-16V-32 created 2025-09-17 16:30:11.755461 | orchestrator | 2025-09-17 16:30:09 | INFO  | Flavor SCS-1V-8 created 2025-09-17 16:30:11.755472 | orchestrator | 2025-09-17 16:30:09 | INFO  | Flavor SCS-2V-16 created 2025-09-17 16:30:11.755483 | orchestrator | 2025-09-17 16:30:09 | INFO  | Flavor SCS-4V-32 created 2025-09-17 16:30:11.755494 | orchestrator | 2025-09-17 16:30:09 | INFO  | Flavor SCS-1L-1 created 2025-09-17 16:30:11.755505 | orchestrator | 2025-09-17 16:30:09 | INFO  | Flavor SCS-2V-4-20s created 2025-09-17 16:30:11.755515 | orchestrator | 2025-09-17 16:30:09 | INFO  | Flavor SCS-4V-16-100s created 2025-09-17 16:30:11.755526 | orchestrator | 2025-09-17 16:30:10 | INFO  | Flavor SCS-1V-4-10 created 2025-09-17 16:30:11.755537 | orchestrator | 2025-09-17 16:30:10 | INFO  | Flavor SCS-2V-8-20 created 2025-09-17 16:30:11.755548 | orchestrator | 2025-09-17 16:30:10 | INFO  | Flavor SCS-4V-16-50 created 2025-09-17 16:30:11.755558 | orchestrator | 2025-09-17 16:30:10 | INFO  | Flavor SCS-8V-32-100 created 
2025-09-17 16:30:11.755569 | orchestrator | 2025-09-17 16:30:10 | INFO  | Flavor SCS-1V-2-5 created 2025-09-17 16:30:11.755580 | orchestrator | 2025-09-17 16:30:10 | INFO  | Flavor SCS-2V-4-10 created 2025-09-17 16:30:11.755591 | orchestrator | 2025-09-17 16:30:10 | INFO  | Flavor SCS-4V-8-20 created 2025-09-17 16:30:11.755603 | orchestrator | 2025-09-17 16:30:10 | INFO  | Flavor SCS-8V-16-50 created 2025-09-17 16:30:11.755614 | orchestrator | 2025-09-17 16:30:11 | INFO  | Flavor SCS-16V-32-100 created 2025-09-17 16:30:11.755625 | orchestrator | 2025-09-17 16:30:11 | INFO  | Flavor SCS-1V-8-20 created 2025-09-17 16:30:11.755636 | orchestrator | 2025-09-17 16:30:11 | INFO  | Flavor SCS-2V-16-50 created 2025-09-17 16:30:11.755646 | orchestrator | 2025-09-17 16:30:11 | INFO  | Flavor SCS-4V-32-100 created 2025-09-17 16:30:11.755657 | orchestrator | 2025-09-17 16:30:11 | INFO  | Flavor SCS-1L-1-5 created 2025-09-17 16:30:13.517082 | orchestrator | 2025-09-17 16:30:13 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-09-17 16:30:23.767282 | orchestrator | 2025-09-17 16:30:23 | INFO  | Task ef9b39e9-298f-409b-8d65-4ac7862ed3e6 (bootstrap-basic) was prepared for execution. 2025-09-17 16:30:23.767432 | orchestrator | 2025-09-17 16:30:23 | INFO  | It takes a moment until task ef9b39e9-298f-409b-8d65-4ac7862ed3e6 (bootstrap-basic) has been started and output is visible here. 
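The flavor-manager run above creates flavors whose names follow the SCS naming scheme (SCS-<vCPUs>V-<RAM GB>[-<disk GB>[s]], with "L" marking a low-performance vCPU class and a trailing "s" marking SSD-backed root disk). A minimal parser sketch for such names; the helper is illustrative only and not part of the testbed tooling:

```python
import re

# Parse SCS-style flavor names as created above, e.g. SCS-1V-4,
# SCS-2V-4-20s, SCS-1L-1-5. Assumes the scheme
# SCS-<cpus><V|L>-<ram-GB>[-<disk-GB>[s]]; hypothetical helper.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_type>[VL])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m["cpus"]),
        "low_perf": m["cpu_type"] == "L",   # "L" = low-performance vCPU
        "ram_gb": int(m["ram"]),
        "disk_gb": int(m["disk"]) if m["disk"] else 0,
        "ssd": m["ssd"] is not None,        # trailing "s" = SSD root disk
    }
```

So `SCS-4V-16-100s` would decode to 4 vCPUs, 16 GB RAM, and a 100 GB SSD root disk.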
2025-09-17 16:31:22.762647 | orchestrator |
2025-09-17 16:31:22.762767 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-09-17 16:31:22.762785 | orchestrator |
2025-09-17 16:31:22.762799 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-17 16:31:22.762815 | orchestrator | Wednesday 17 September 2025 16:30:27 +0000 (0:00:00.070) 0:00:00.070 ***
2025-09-17 16:31:22.762829 | orchestrator | ok: [localhost]
2025-09-17 16:31:22.762844 | orchestrator |
2025-09-17 16:31:22.762858 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-09-17 16:31:22.762874 | orchestrator | Wednesday 17 September 2025 16:30:29 +0000 (0:00:01.721) 0:00:01.791 ***
2025-09-17 16:31:22.762888 | orchestrator | ok: [localhost]
2025-09-17 16:31:22.762903 | orchestrator |
2025-09-17 16:31:22.762916 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-09-17 16:31:22.762930 | orchestrator | Wednesday 17 September 2025 16:30:37 +0000 (0:00:07.979) 0:00:09.771 ***
2025-09-17 16:31:22.763026 | orchestrator | changed: [localhost]
2025-09-17 16:31:22.763042 | orchestrator |
2025-09-17 16:31:22.763057 | orchestrator | TASK [Get volume type local] ***************************************************
2025-09-17 16:31:22.763071 | orchestrator | Wednesday 17 September 2025 16:30:44 +0000 (0:00:06.775) 0:00:16.547 ***
2025-09-17 16:31:22.763084 | orchestrator | ok: [localhost]
2025-09-17 16:31:22.763098 | orchestrator |
2025-09-17 16:31:22.763114 | orchestrator | TASK [Create volume type local] ************************************************
2025-09-17 16:31:22.763128 | orchestrator | Wednesday 17 September 2025 16:30:51 +0000 (0:00:06.857) 0:00:23.405 ***
2025-09-17 16:31:22.763140 | orchestrator | changed: [localhost]
2025-09-17 16:31:22.763155 | orchestrator |
2025-09-17 16:31:22.763167 | orchestrator | TASK [Create public network] ***************************************************
2025-09-17 16:31:22.763181 | orchestrator | Wednesday 17 September 2025 16:30:57 +0000 (0:00:06.591) 0:00:29.996 ***
2025-09-17 16:31:22.763196 | orchestrator | changed: [localhost]
2025-09-17 16:31:22.763210 | orchestrator |
2025-09-17 16:31:22.763224 | orchestrator | TASK [Set public network to default] *******************************************
2025-09-17 16:31:22.763239 | orchestrator | Wednesday 17 September 2025 16:31:04 +0000 (0:00:06.591) 0:00:36.588 ***
2025-09-17 16:31:22.763253 | orchestrator | changed: [localhost]
2025-09-17 16:31:22.763268 | orchestrator |
2025-09-17 16:31:22.763294 | orchestrator | TASK [Create public subnet] ****************************************************
2025-09-17 16:31:22.763309 | orchestrator | Wednesday 17 September 2025 16:31:10 +0000 (0:00:06.236) 0:00:42.825 ***
2025-09-17 16:31:22.763323 | orchestrator | changed: [localhost]
2025-09-17 16:31:22.763337 | orchestrator |
2025-09-17 16:31:22.763352 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-09-17 16:31:22.763366 | orchestrator | Wednesday 17 September 2025 16:31:14 +0000 (0:00:04.317) 0:00:47.143 ***
2025-09-17 16:31:22.763381 | orchestrator | changed: [localhost]
2025-09-17 16:31:22.763395 | orchestrator |
2025-09-17 16:31:22.763410 | orchestrator | TASK [Create manager role] *****************************************************
2025-09-17 16:31:22.763426 | orchestrator | Wednesday 17 September 2025 16:31:19 +0000 (0:00:04.358) 0:00:51.501 ***
2025-09-17 16:31:22.763441 | orchestrator | ok: [localhost]
2025-09-17 16:31:22.763456 | orchestrator |
2025-09-17 16:31:22.763470 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:31:22.763485 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:31:22.763500 | orchestrator |
2025-09-17 16:31:22.763515 | orchestrator |
2025-09-17 16:31:22.763531 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:31:22.763545 | orchestrator | Wednesday 17 September 2025 16:31:22 +0000 (0:00:03.436) 0:00:54.938 ***
2025-09-17 16:31:22.763591 | orchestrator | ===============================================================================
2025-09-17 16:31:22.763605 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.98s
2025-09-17 16:31:22.763619 | orchestrator | Get volume type local --------------------------------------------------- 6.86s
2025-09-17 16:31:22.763633 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.78s
2025-09-17 16:31:22.763646 | orchestrator | Create public network --------------------------------------------------- 6.59s
2025-09-17 16:31:22.763659 | orchestrator | Create volume type local ------------------------------------------------ 6.59s
2025-09-17 16:31:22.763675 | orchestrator | Set public network to default ------------------------------------------- 6.24s
2025-09-17 16:31:22.763689 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.36s
2025-09-17 16:31:22.763703 | orchestrator | Create public subnet ---------------------------------------------------- 4.32s
2025-09-17 16:31:22.763716 | orchestrator | Create manager role ----------------------------------------------------- 3.44s
2025-09-17 16:31:22.763730 | orchestrator | Gathering Facts --------------------------------------------------------- 1.72s
2025-09-17 16:31:24.913696 | orchestrator | 2025-09-17 16:31:24 | INFO  | It takes a moment until task 5865edb8-cfa1-49db-a093-ac15712703ec (image-manager) has been started and output is visible here.
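The "Get volume type X" (ok) followed by "Create volume type X" (changed) task pairs in the play above implement a get-or-create pattern, which keeps the bootstrap idempotent on reruns. A rough sketch of that pattern; the client here is a dict-backed stand-in for an OpenStack connection, not the play's actual modules:

```python
# Get-or-create sketch mirroring the "Get volume type X" / "Create volume
# type X" task pairs above. FakeVolumeTypes is a stand-in for a real
# OpenStack client so the example stays runnable offline.
class FakeVolumeTypes:
    def __init__(self):
        self._types = {}

    def find(self, name):
        return self._types.get(name)

    def create(self, name, **extra_specs):
        self._types[name] = {"name": name, **extra_specs}
        return self._types[name]

def ensure_volume_type(client, name, **extra_specs):
    """Return (volume_type, changed): create only when missing."""
    existing = client.find(name)
    if existing is not None:
        return existing, False                        # Ansible reports "ok"
    return client.create(name, **extra_specs), True   # Ansible reports "changed"
```

Running it twice for the same name yields `changed=True` then `changed=False`, matching the ok/changed pattern in the play output.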
2025-09-17 16:32:14.743778 | orchestrator | 2025-09-17 16:31:28 | INFO  | Processing image 'Cirros 0.6.2' 2025-09-17 16:32:14.743899 | orchestrator | 2025-09-17 16:31:28 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-09-17 16:32:14.743920 | orchestrator | 2025-09-17 16:31:28 | INFO  | Importing image Cirros 0.6.2 2025-09-17 16:32:14.743933 | orchestrator | 2025-09-17 16:31:28 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-17 16:32:14.743945 | orchestrator | 2025-09-17 16:31:30 | INFO  | Waiting for image to leave queued state... 2025-09-17 16:32:14.744021 | orchestrator | 2025-09-17 16:31:32 | INFO  | Waiting for import to complete... 2025-09-17 16:32:14.744033 | orchestrator | 2025-09-17 16:31:42 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-09-17 16:32:14.744045 | orchestrator | 2025-09-17 16:31:42 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-09-17 16:32:14.744056 | orchestrator | 2025-09-17 16:31:42 | INFO  | Setting internal_version = 0.6.2 2025-09-17 16:32:14.744067 | orchestrator | 2025-09-17 16:31:42 | INFO  | Setting image_original_user = cirros 2025-09-17 16:32:14.744079 | orchestrator | 2025-09-17 16:31:42 | INFO  | Adding tag os:cirros 2025-09-17 16:32:14.744090 | orchestrator | 2025-09-17 16:31:43 | INFO  | Setting property architecture: x86_64 2025-09-17 16:32:14.744101 | orchestrator | 2025-09-17 16:31:43 | INFO  | Setting property hw_disk_bus: scsi 2025-09-17 16:32:14.744112 | orchestrator | 2025-09-17 16:31:43 | INFO  | Setting property hw_rng_model: virtio 2025-09-17 16:32:14.744123 | orchestrator | 2025-09-17 16:31:43 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-17 16:32:14.744134 | orchestrator | 2025-09-17 16:31:43 | INFO  | Setting property hw_watchdog_action: reset 2025-09-17 16:32:14.744145 | orchestrator | 2025-09-17 16:31:44 | 
INFO  | Setting property hypervisor_type: qemu 2025-09-17 16:32:14.744155 | orchestrator | 2025-09-17 16:31:44 | INFO  | Setting property os_distro: cirros 2025-09-17 16:32:14.744166 | orchestrator | 2025-09-17 16:31:44 | INFO  | Setting property replace_frequency: never 2025-09-17 16:32:14.744177 | orchestrator | 2025-09-17 16:31:44 | INFO  | Setting property uuid_validity: none 2025-09-17 16:32:14.744187 | orchestrator | 2025-09-17 16:31:45 | INFO  | Setting property provided_until: none 2025-09-17 16:32:14.744223 | orchestrator | 2025-09-17 16:31:45 | INFO  | Setting property image_description: Cirros 2025-09-17 16:32:14.744243 | orchestrator | 2025-09-17 16:31:45 | INFO  | Setting property image_name: Cirros 2025-09-17 16:32:14.744254 | orchestrator | 2025-09-17 16:31:45 | INFO  | Setting property internal_version: 0.6.2 2025-09-17 16:32:14.744270 | orchestrator | 2025-09-17 16:31:45 | INFO  | Setting property image_original_user: cirros 2025-09-17 16:32:14.744281 | orchestrator | 2025-09-17 16:31:46 | INFO  | Setting property os_version: 0.6.2 2025-09-17 16:32:14.744294 | orchestrator | 2025-09-17 16:31:46 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-17 16:32:14.744308 | orchestrator | 2025-09-17 16:31:46 | INFO  | Setting property image_build_date: 2023-05-30 2025-09-17 16:32:14.744320 | orchestrator | 2025-09-17 16:31:46 | INFO  | Checking status of 'Cirros 0.6.2' 2025-09-17 16:32:14.744333 | orchestrator | 2025-09-17 16:31:46 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-09-17 16:32:14.744345 | orchestrator | 2025-09-17 16:31:46 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-09-17 16:32:14.744357 | orchestrator | 2025-09-17 16:31:47 | INFO  | Processing image 'Cirros 0.6.3' 2025-09-17 16:32:14.744369 | orchestrator | 2025-09-17 16:31:47 | INFO  | Tested URL 
https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-09-17 16:32:14.744381 | orchestrator | 2025-09-17 16:31:47 | INFO  | Importing image Cirros 0.6.3 2025-09-17 16:32:14.744393 | orchestrator | 2025-09-17 16:31:47 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-17 16:32:14.744406 | orchestrator | 2025-09-17 16:31:47 | INFO  | Waiting for image to leave queued state... 2025-09-17 16:32:14.744418 | orchestrator | 2025-09-17 16:31:49 | INFO  | Waiting for import to complete... 2025-09-17 16:32:14.744430 | orchestrator | 2025-09-17 16:31:59 | INFO  | Waiting for import to complete... 2025-09-17 16:32:14.744461 | orchestrator | 2025-09-17 16:32:10 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-09-17 16:32:14.744474 | orchestrator | 2025-09-17 16:32:10 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-09-17 16:32:14.744487 | orchestrator | 2025-09-17 16:32:10 | INFO  | Setting internal_version = 0.6.3 2025-09-17 16:32:14.744499 | orchestrator | 2025-09-17 16:32:10 | INFO  | Setting image_original_user = cirros 2025-09-17 16:32:14.744511 | orchestrator | 2025-09-17 16:32:10 | INFO  | Adding tag os:cirros 2025-09-17 16:32:14.744523 | orchestrator | 2025-09-17 16:32:10 | INFO  | Setting property architecture: x86_64 2025-09-17 16:32:14.744535 | orchestrator | 2025-09-17 16:32:10 | INFO  | Setting property hw_disk_bus: scsi 2025-09-17 16:32:14.744547 | orchestrator | 2025-09-17 16:32:11 | INFO  | Setting property hw_rng_model: virtio 2025-09-17 16:32:14.744559 | orchestrator | 2025-09-17 16:32:11 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-17 16:32:14.744572 | orchestrator | 2025-09-17 16:32:11 | INFO  | Setting property hw_watchdog_action: reset 2025-09-17 16:32:14.744584 | orchestrator | 2025-09-17 16:32:11 | INFO  | Setting property hypervisor_type: qemu 2025-09-17 16:32:14.744596 | 
orchestrator | 2025-09-17 16:32:11 | INFO  | Setting property os_distro: cirros 2025-09-17 16:32:14.744608 | orchestrator | 2025-09-17 16:32:11 | INFO  | Setting property replace_frequency: never 2025-09-17 16:32:14.744629 | orchestrator | 2025-09-17 16:32:12 | INFO  | Setting property uuid_validity: none 2025-09-17 16:32:14.744648 | orchestrator | 2025-09-17 16:32:12 | INFO  | Setting property provided_until: none 2025-09-17 16:32:14.744666 | orchestrator | 2025-09-17 16:32:12 | INFO  | Setting property image_description: Cirros 2025-09-17 16:32:14.744683 | orchestrator | 2025-09-17 16:32:12 | INFO  | Setting property image_name: Cirros 2025-09-17 16:32:14.744700 | orchestrator | 2025-09-17 16:32:12 | INFO  | Setting property internal_version: 0.6.3 2025-09-17 16:32:14.744717 | orchestrator | 2025-09-17 16:32:13 | INFO  | Setting property image_original_user: cirros 2025-09-17 16:32:14.744735 | orchestrator | 2025-09-17 16:32:13 | INFO  | Setting property os_version: 0.6.3 2025-09-17 16:32:14.744754 | orchestrator | 2025-09-17 16:32:13 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-17 16:32:14.744774 | orchestrator | 2025-09-17 16:32:13 | INFO  | Setting property image_build_date: 2024-09-26 2025-09-17 16:32:14.744787 | orchestrator | 2025-09-17 16:32:13 | INFO  | Checking status of 'Cirros 0.6.3' 2025-09-17 16:32:14.744804 | orchestrator | 2025-09-17 16:32:13 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-09-17 16:32:14.744815 | orchestrator | 2025-09-17 16:32:13 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-09-17 16:32:15.075639 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-09-17 16:32:17.083588 | orchestrator | 2025-09-17 16:32:17 | INFO  | date: 2025-09-17 2025-09-17 16:32:17.083702 | orchestrator | 2025-09-17 16:32:17 | INFO  | image: octavia-amphora-haproxy-2024.2.20250917.qcow2 
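The repeated "Waiting for import to complete..." and "Wait 1 second(s) until refresh of running tasks" messages above come from poll loops that re-check a state on a fixed interval until it becomes terminal. A generic sketch of that loop; the `status` callable and state names are assumptions for illustration, not the image manager's actual API:

```python
import time

# Generic poll-until-done loop matching the "Waiting for import to
# complete..." messages above: re-check a status callable on a fixed
# interval until a terminal state or a timeout. States are assumptions.
def wait_for(status, done=frozenset({"active"}), failed=frozenset({"error"}),
             interval=10.0, timeout=600.0, sleep=time.sleep):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = status()
        if state in done:
            return state
        if state in failed:
            raise RuntimeError(f"import failed in state {state!r}")
        sleep(interval)  # injectable for testing
    raise TimeoutError("import did not complete in time")
```

Injecting `sleep` keeps the helper testable without real delays; in the log the interval is roughly ten seconds for image imports and three seconds for task refreshes.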
2025-09-17 16:32:17.084023 | orchestrator | 2025-09-17 16:32:17 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250917.qcow2 2025-09-17 16:32:17.084146 | orchestrator | 2025-09-17 16:32:17 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250917.qcow2.CHECKSUM 2025-09-17 16:32:17.120353 | orchestrator | 2025-09-17 16:32:17 | INFO  | checksum: 09a35b45f74dc7c4c3be84179f84a8179fe00a0c8adab577e4eac807999052b5 2025-09-17 16:32:17.196318 | orchestrator | 2025-09-17 16:32:17 | INFO  | It takes a moment until task 33906651-3036-4d0e-b252-06321bfc90cb (image-manager) has been started and output is visible here. 2025-09-17 16:33:18.346900 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-09-17 16:33:18.347039 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-09-17 16:33:18.347058 | orchestrator | 2025-09-17 16:32:19 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-09-17' 2025-09-17 16:33:18.347072 | orchestrator | 2025-09-17 16:32:19 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250917.qcow2: 200 2025-09-17 16:33:18.347083 | orchestrator | 2025-09-17 16:32:19 | INFO  | Importing image OpenStack Octavia Amphora 2025-09-17 2025-09-17 16:33:18.347092 | orchestrator | 2025-09-17 16:32:19 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250917.qcow2 2025-09-17 16:33:18.347123 | orchestrator | 2025-09-17 16:32:20 | INFO  | Waiting for image to leave queued state... 2025-09-17 16:33:18.347133 | orchestrator | 2025-09-17 16:32:22 | INFO  | Waiting for import to complete... 2025-09-17 16:33:18.347143 | orchestrator | 2025-09-17 16:32:33 | INFO  | Waiting for import to complete... 2025-09-17 16:33:18.347152 | orchestrator | 2025-09-17 16:32:43 | INFO  | Waiting for import to complete... 2025-09-17 16:33:18.347161 | orchestrator | 2025-09-17 16:32:53 | INFO  | Waiting for import to complete... 2025-09-17 16:33:18.347169 | orchestrator | 2025-09-17 16:33:03 | INFO  | Waiting for import to complete... 
2025-09-17 16:33:18.347178 | orchestrator | 2025-09-17 16:33:13 | INFO  | Import of 'OpenStack Octavia Amphora 2025-09-17' successfully completed, reloading images 2025-09-17 16:33:18.347187 | orchestrator | 2025-09-17 16:33:13 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-09-17' 2025-09-17 16:33:18.347197 | orchestrator | 2025-09-17 16:33:13 | INFO  | Setting internal_version = 2025-09-17 2025-09-17 16:33:18.347205 | orchestrator | 2025-09-17 16:33:13 | INFO  | Setting image_original_user = ubuntu 2025-09-17 16:33:18.347214 | orchestrator | 2025-09-17 16:33:13 | INFO  | Adding tag amphora 2025-09-17 16:33:18.347223 | orchestrator | 2025-09-17 16:33:14 | INFO  | Adding tag os:ubuntu 2025-09-17 16:33:18.347240 | orchestrator | 2025-09-17 16:33:14 | INFO  | Setting property architecture: x86_64 2025-09-17 16:33:18.347249 | orchestrator | 2025-09-17 16:33:14 | INFO  | Setting property hw_disk_bus: scsi 2025-09-17 16:33:18.347257 | orchestrator | 2025-09-17 16:33:14 | INFO  | Setting property hw_rng_model: virtio 2025-09-17 16:33:18.347266 | orchestrator | 2025-09-17 16:33:14 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-17 16:33:18.347275 | orchestrator | 2025-09-17 16:33:15 | INFO  | Setting property hw_watchdog_action: reset 2025-09-17 16:33:18.347284 | orchestrator | 2025-09-17 16:33:15 | INFO  | Setting property hypervisor_type: qemu 2025-09-17 16:33:18.347292 | orchestrator | 2025-09-17 16:33:15 | INFO  | Setting property os_distro: ubuntu 2025-09-17 16:33:18.347301 | orchestrator | 2025-09-17 16:33:15 | INFO  | Setting property replace_frequency: quarterly 2025-09-17 16:33:18.347309 | orchestrator | 2025-09-17 16:33:15 | INFO  | Setting property uuid_validity: last-1 2025-09-17 16:33:18.347318 | orchestrator | 2025-09-17 16:33:16 | INFO  | Setting property provided_until: none 2025-09-17 16:33:18.347327 | orchestrator | 2025-09-17 16:33:16 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-09-17 
16:33:18.347336 | orchestrator | 2025-09-17 16:33:16 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-09-17 16:33:18.347344 | orchestrator | 2025-09-17 16:33:16 | INFO  | Setting property internal_version: 2025-09-17 2025-09-17 16:33:18.347353 | orchestrator | 2025-09-17 16:33:17 | INFO  | Setting property image_original_user: ubuntu 2025-09-17 16:33:18.347362 | orchestrator | 2025-09-17 16:33:17 | INFO  | Setting property os_version: 2025-09-17 2025-09-17 16:33:18.347371 | orchestrator | 2025-09-17 16:33:17 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250917.qcow2 2025-09-17 16:33:18.347396 | orchestrator | 2025-09-17 16:33:17 | INFO  | Setting property image_build_date: 2025-09-17 2025-09-17 16:33:18.347405 | orchestrator | 2025-09-17 16:33:17 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-09-17' 2025-09-17 16:33:18.347420 | orchestrator | 2025-09-17 16:33:17 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-09-17' 2025-09-17 16:33:18.347429 | orchestrator | 2025-09-17 16:33:18 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-09-17 16:33:18.347438 | orchestrator | 2025-09-17 16:33:18 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-09-17 16:33:18.347451 | orchestrator | 2025-09-17 16:33:18 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-09-17 16:33:18.347461 | orchestrator | 2025-09-17 16:33:18 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-09-17 16:33:18.787099 | orchestrator | ok: Runtime: 0:03:18.711862 2025-09-17 16:33:18.812357 | 2025-09-17 16:33:18.812528 | TASK [Run checks] 2025-09-17 16:33:19.535171 | orchestrator | + set -e 2025-09-17 16:33:19.535353 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-17 16:33:19.535378 | 
orchestrator | ++ export INTERACTIVE=false 2025-09-17 16:33:19.535399 | orchestrator | ++ INTERACTIVE=false 2025-09-17 16:33:19.535413 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-17 16:33:19.535425 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-17 16:33:19.535439 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-17 16:33:19.536197 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-17 16:33:19.542136 | orchestrator | 2025-09-17 16:33:19.542168 | orchestrator | # CHECK 2025-09-17 16:33:19.542180 | orchestrator | 2025-09-17 16:33:19.542193 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-17 16:33:19.542208 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-17 16:33:19.542219 | orchestrator | + echo 2025-09-17 16:33:19.542230 | orchestrator | + echo '# CHECK' 2025-09-17 16:33:19.542241 | orchestrator | + echo 2025-09-17 16:33:19.542256 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-17 16:33:19.543121 | orchestrator | ++ semver 9.2.0 5.0.0 2025-09-17 16:33:19.604752 | orchestrator | 2025-09-17 16:33:19.604816 | orchestrator | ## Containers @ testbed-manager 2025-09-17 16:33:19.604826 | orchestrator | 2025-09-17 16:33:19.604835 | orchestrator | + [[ 1 -eq -1 ]] 2025-09-17 16:33:19.604842 | orchestrator | + echo 2025-09-17 16:33:19.604849 | orchestrator | + echo '## Containers @ testbed-manager' 2025-09-17 16:33:19.604857 | orchestrator | + echo 2025-09-17 16:33:19.604865 | orchestrator | + osism container testbed-manager ps 2025-09-17 16:33:21.750139 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-17 16:33:21.750261 | orchestrator | b62e47122810 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter 2025-09-17 16:33:21.750293 | orchestrator | 97f7168fac1a 
registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager 2025-09-17 16:33:21.750311 | orchestrator | 9b347e447cdb registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-09-17 16:33:21.750322 | orchestrator | ffbc25219a1f registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-09-17 16:33:21.750332 | orchestrator | 5024eae0c075 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server 2025-09-17 16:33:21.750357 | orchestrator | e2d203b535b9 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient 2025-09-17 16:33:21.750372 | orchestrator | d5094863104f registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-09-17 16:33:21.750382 | orchestrator | ef63b868ac18 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-09-17 16:33:21.750392 | orchestrator | c1557ff43e35 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-09-17 16:33:21.750425 | orchestrator | 2d7b8c7494a9 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin 2025-09-17 16:33:21.750435 | orchestrator | 57e2ea9e7f26 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 31 minutes openstackclient 2025-09-17 16:33:21.750446 | orchestrator | a1c0b6fec8cc registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 31 minutes ago Up 31 minutes (healthy) 8080/tcp homer 2025-09-17 16:33:21.750456 | orchestrator 
| 20e3dc2c71b8 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-09-17 16:33:21.750471 | orchestrator | fbe0ef8d5b5a registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" 57 minutes ago Up 37 minutes (healthy) manager-inventory_reconciler-1 2025-09-17 16:33:21.750711 | orchestrator | 0110a91d2704 registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" 57 minutes ago Up 38 minutes (healthy) ceph-ansible 2025-09-17 16:33:21.750728 | orchestrator | 505e1cb73156 registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" 57 minutes ago Up 38 minutes (healthy) kolla-ansible 2025-09-17 16:33:21.750738 | orchestrator | 4212f3040c42 registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" 57 minutes ago Up 38 minutes (healthy) osism-ansible 2025-09-17 16:33:21.750748 | orchestrator | 08e7621f95a3 registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" 57 minutes ago Up 38 minutes (healthy) osism-kubernetes 2025-09-17 16:33:21.750758 | orchestrator | 4aaa7981ad0f registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 57 minutes ago Up 38 minutes (healthy) 8000/tcp manager-ara-server-1 2025-09-17 16:33:21.750768 | orchestrator | 05b5b06f8dd9 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) manager-beat-1 2025-09-17 16:33:21.750777 | orchestrator | 5a158d6eec4d registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" 57 minutes ago Up 38 minutes (healthy) osismclient 2025-09-17 16:33:21.750787 | orchestrator | 31eb37076cf1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) manager-listener-1 2025-09-17 16:33:21.750797 | orchestrator | 6b0351e8749e registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 
38 minutes (healthy) manager-flower-1
2025-09-17 16:33:21.750815 | orchestrator | 40c8a67b9ff9 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 57 minutes ago Up 38 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2025-09-17 16:33:21.750825 | orchestrator | 5ba0338e5831 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" 57 minutes ago Up 38 minutes (healthy) 3306/tcp manager-mariadb-1
2025-09-17 16:33:21.750835 | orchestrator | b5123aa58981 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 57 minutes ago Up 38 minutes (healthy) 6379/tcp manager-redis-1
2025-09-17 16:33:21.750845 | orchestrator | 027a74bd89a2 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-09-17 16:33:21.750855 | orchestrator | d484b7b7ac59 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) manager-openstack-1
2025-09-17 16:33:21.750864 | orchestrator | b08b324f5058 registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-09-17 16:33:21.994013 | orchestrator |
2025-09-17 16:33:21.994148 | orchestrator | ## Images @ testbed-manager
2025-09-17 16:33:21.994161 | orchestrator |
2025-09-17 16:33:21.994174 | orchestrator | + echo
2025-09-17 16:33:21.994186 | orchestrator | + echo '## Images @ testbed-manager'
2025-09-17 16:33:21.994197 | orchestrator | + echo
2025-09-17 16:33:21.994209 | orchestrator | + osism container testbed-manager images
2025-09-17 16:33:24.088508 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-17 16:33:24.088591 | orchestrator | registry.osism.tech/osism/osism-frontend latest 047f7feec3e3 2 hours ago 236MB
2025-09-17 16:33:24.088598 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 5f07ca023aca 13 hours ago 240MB
2025-09-17 16:33:24.088622 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d3334946e20e 5 weeks ago 11.5MB
2025-09-17 16:33:24.088627 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250711.0 fcbac8373342 2 months ago 571MB
2025-09-17 16:33:24.088631 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB
2025-09-17 16:33:24.088636 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB
2025-09-17 16:33:24.088641 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB
2025-09-17 16:33:24.088645 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250711 cb02c47a5187 2 months ago 891MB
2025-09-17 16:33:24.088649 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250711 0ac8facfe451 2 months ago 360MB
2025-09-17 16:33:24.088654 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB
2025-09-17 16:33:24.088658 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250711 6c4eef6335f5 2 months ago 456MB
2025-09-17 16:33:24.088677 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB
2025-09-17 16:33:24.088682 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250711.0 7b0f9e78b4e4 2 months ago 575MB
2025-09-17 16:33:24.088686 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250711.0 f677f8f8094b 2 months ago 535MB
2025-09-17 16:33:24.088691 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250711.0 8fcfa643b744 2 months ago 308MB
2025-09-17 16:33:24.088695 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250711.0 267f92fc46f6 2 months ago 1.21GB
2025-09-17 16:33:24.088699 | orchestrator | registry.osism.tech/osism/osism 0.20250709.0 ccd699d89870 2 months ago 310MB
2025-09-17 16:33:24.088704 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 2 months ago 41.4MB
2025-09-17 16:33:24.088708 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 months ago 226MB
2025-09-17 16:33:24.088712 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 dae0c92b7b63 3 months ago 329MB
2025-09-17 16:33:24.088717 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 4 months ago 453MB
2025-09-17 16:33:24.088721 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 7 months ago 571MB
2025-09-17 16:33:24.088725 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 12 months ago 300MB
2025-09-17 16:33:24.088730 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 15 months ago 146MB
2025-09-17 16:33:24.342854 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-17 16:33:24.343056 | orchestrator | ++ semver 9.2.0 5.0.0
2025-09-17 16:33:24.390329 | orchestrator |
2025-09-17 16:33:24.390423 | orchestrator | ## Containers @ testbed-node-0
2025-09-17 16:33:24.390439 | orchestrator |
2025-09-17 16:33:24.390452 | orchestrator | + [[ 1 -eq -1 ]]
2025-09-17 16:33:24.390463 | orchestrator | + echo
2025-09-17 16:33:24.390475 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-09-17 16:33:24.390487 | orchestrator | + echo
2025-09-17 16:33:24.390498 | orchestrator | + osism container testbed-node-0 ps
2025-09-17 16:33:26.642919 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-17 16:33:26.643045 | orchestrator | 71ffe8310830 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-09-17 16:33:26.643065 | orchestrator | abfca056d3e2 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-09-17 16:33:26.643077 | orchestrator | 217959cab9a0 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-09-17 16:33:26.643089 | orchestrator | f875f159cd08 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-17 16:33:26.643100 | orchestrator | 227c5fc59006 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2025-09-17 16:33:26.643140 | orchestrator | c19f5d4fac31 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-09-17 16:33:26.643153 | orchestrator | 9eb8a9f5a240 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-09-17 16:33:26.643190 | orchestrator | 2f92e0781537 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-09-17 16:33:26.643203 | orchestrator | e8c74177c856 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-09-17 16:33:26.643216 | orchestrator | c11f9b352616 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor
2025-09-17 16:33:26.643227 | orchestrator | 6da01fdac39f registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter
2025-09-17 16:33:26.643254 | orchestrator | 9b27998d9063 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-09-17 16:33:26.643267 | orchestrator | 927755f6c5e3 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-09-17 16:33:26.643278 | orchestrator | c554d8ab261f registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor
2025-09-17 16:33:26.643290 | orchestrator | 8a28fa8f768e registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-09-17 16:33:26.643302 | orchestrator | 88001be587ef registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-09-17 16:33:26.643313 | orchestrator | 9b7f8e8cc0c0 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker
2025-09-17 16:33:26.643325 | orchestrator | 87585c10a87b registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns
2025-09-17 16:33:26.643336 | orchestrator | 2579fedaf3d4 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer
2025-09-17 16:33:26.643348 | orchestrator | ed17f3d70716 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central
2025-09-17 16:33:26.643359 | orchestrator | c572829edf10 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-09-17 16:33:26.643371 | orchestrator | dcc58bf4689c registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2025-09-17 16:33:26.643382 | orchestrator | e6d7bcbe6b77 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api
2025-09-17 16:33:26.643394 | orchestrator | babb83eb5b99 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker
2025-09-17 16:33:26.643410 | orchestrator | 495068dacc4a registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener
2025-09-17 16:33:26.643430 | orchestrator | a778b0773caa registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api
2025-09-17 16:33:26.643441 | orchestrator | a17b0f2865f4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2025-09-17 16:33:26.643453 | orchestrator | 9c55c93452fd registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2025-09-17 16:33:26.643468 | orchestrator | edcf5c839090 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2025-09-17 16:33:26.643479 | orchestrator | 80568de98468 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-09-17 16:33:26.643491 | orchestrator | 6ff7600c1e24 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2025-09-17 16:33:26.643502 | orchestrator | b2c9d4860c75 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-09-17 16:33:26.643524 | orchestrator | 3ab8ba9c0e18 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-09-17 16:33:26.643536 | orchestrator | cded465fb4b4 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-09-17 16:33:26.643548 | orchestrator | bdba28df2202 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0
2025-09-17 16:33:26.643564 | orchestrator | a2ae2318fafb registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-09-17 16:33:26.643576 | orchestrator | 477f4d9978a9 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-09-17 16:33:26.643587 | orchestrator | d12401ec7513 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-09-17 16:33:26.643599 | orchestrator | eeea645c3c76 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-09-17 16:33:26.643610 | orchestrator | 225a7deccb6b registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db
2025-09-17 16:33:26.643622 | orchestrator | 910a8a2050fb registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db
2025-09-17 16:33:26.643633 | orchestrator | d32dded49807 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-09-17 16:33:26.643645 | orchestrator | 759bfa46d343 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0
2025-09-17 16:33:26.643663 | orchestrator | ac9fd340feff registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-09-17 16:33:26.643675 | orchestrator | f1ef5d5caf45 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-09-17 16:33:26.643686 | orchestrator | f9cb48e74a64 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-09-17 16:33:26.643698 | orchestrator | b7d87bda203c registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-09-17 16:33:26.643709 | orchestrator | 576b2fd650c1 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-09-17 16:33:26.643721 | orchestrator | 71f51ab6222d registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-09-17 16:33:26.643733 | orchestrator | 2caf09494cde registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-09-17 16:33:26.643744 | orchestrator | c63f2b60c6e4 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-09-17 16:33:26.643756 | orchestrator | 5fd46903f8e2 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-09-17 16:33:26.812615 | orchestrator |
2025-09-17 16:33:26.812709 | orchestrator | ## Images @ testbed-node-0
2025-09-17 16:33:26.812732 | orchestrator |
2025-09-17 16:33:26.812752 | orchestrator | + echo
2025-09-17 16:33:26.812773 | orchestrator | + echo '## Images @ testbed-node-0'
2025-09-17 16:33:26.812793 | orchestrator | + echo
2025-09-17 16:33:26.812811 | orchestrator | + osism container testbed-node-0 images
2025-09-17 16:33:28.811453 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-17 16:33:28.811539 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB
2025-09-17 16:33:28.811554 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 2 months ago 329MB
2025-09-17 16:33:28.811566 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 2 months ago 326MB
2025-09-17 16:33:28.811577 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 2 months ago 1.59GB
2025-09-17 16:33:28.811588 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 2 months ago 1.55GB
2025-09-17 16:33:28.811599 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 2 months ago 417MB
2025-09-17 16:33:28.811610 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 2 months ago 318MB
2025-09-17 16:33:28.811621 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 2 months ago 375MB
2025-09-17 16:33:28.811633 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB
2025-09-17 16:33:28.811644 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 2 months ago 1.01GB
2025-09-17 16:33:28.811655 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB
2025-09-17 16:33:28.811691 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 2 months ago 361MB
2025-09-17 16:33:28.811704 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 2 months ago 361MB
2025-09-17 16:33:28.811715 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 2 months ago 1.21GB
2025-09-17 16:33:28.811726 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 2 months ago 353MB
2025-09-17 16:33:28.811752 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB
2025-09-17 16:33:28.811765 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 2 months ago 344MB
2025-09-17 16:33:28.811776 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB
2025-09-17 16:33:28.811787 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 2 months ago 351MB
2025-09-17 16:33:28.811799 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 2 months ago 324MB
2025-09-17 16:33:28.811810 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 2 months ago 324MB
2025-09-17 16:33:28.811821 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 2 months ago 590MB
2025-09-17 16:33:28.811833 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 2 months ago 946MB
2025-09-17 16:33:28.811844 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 2 months ago 947MB
2025-09-17 16:33:28.811855 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 2 months ago 947MB
2025-09-17 16:33:28.811866 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 2 months ago 946MB
2025-09-17 16:33:28.811878 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250711 05a4552273f6 2 months ago 1.04GB
2025-09-17 16:33:28.811889 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250711 41f8c34132c7 2 months ago 1.04GB
2025-09-17 16:33:28.811900 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 2 months ago 1.1GB
2025-09-17 16:33:28.811911 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 2 months ago 1.1GB
2025-09-17 16:33:28.811923 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 2 months ago 1.12GB
2025-09-17 16:33:28.811989 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 2 months ago 1.1GB
2025-09-17 16:33:28.812003 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 2 months ago 1.12GB
2025-09-17 16:33:28.812014 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 2 months ago 1.15GB
2025-09-17 16:33:28.812025 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 2 months ago 1.04GB
2025-09-17 16:33:28.812035 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 2 months ago 1.06GB
2025-09-17 16:33:28.812046 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 2 months ago 1.06GB
2025-09-17 16:33:28.812065 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 2 months ago 1.06GB
2025-09-17 16:33:28.812076 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 2 months ago 1.41GB
2025-09-17 16:33:28.812087 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 2 months ago 1.41GB
2025-09-17 16:33:28.812103 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 2 months ago 1.29GB
2025-09-17 16:33:28.812114 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 2 months ago 1.42GB
2025-09-17 16:33:28.812125 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 2 months ago 1.29GB
2025-09-17 16:33:28.812136 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 2 months ago 1.29GB
2025-09-17 16:33:28.812146 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 2 months ago 1.2GB
2025-09-17 16:33:28.812157 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 2 months ago 1.31GB
2025-09-17 16:33:28.812168 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 2 months ago 1.05GB
2025-09-17 16:33:28.812178 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 2 months ago 1.05GB
2025-09-17 16:33:28.812189 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 2 months ago 1.05GB
2025-09-17 16:33:28.812199 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 2 months ago 1.06GB
2025-09-17 16:33:28.812210 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 2 months ago 1.06GB
2025-09-17 16:33:28.812220 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 2 months ago 1.05GB
2025-09-17 16:33:28.812231 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250711 f2e37439c6b7 2 months ago 1.11GB
2025-09-17 16:33:28.812242 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250711 b3d19c53d4de 2 months ago 1.11GB
2025-09-17 16:33:28.812252 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 2 months ago 1.11GB
2025-09-17 16:33:28.812263 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 2 months ago 1.13GB
2025-09-17 16:33:28.812274 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 2 months ago 1.11GB
2025-09-17 16:33:28.812285 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 2 months ago 1.24GB
2025-09-17 16:33:28.812295 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250711 c26d685bbc69 2 months ago 1.04GB
2025-09-17 16:33:28.812306 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250711 55a7448b63ad 2 months ago 1.04GB
2025-09-17 16:33:28.812316 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250711 b8a4d60cb725 2 months ago 1.04GB
2025-09-17 16:33:28.812327 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250711 c0822bfcb81c 2 months ago 1.04GB
2025-09-17 16:33:28.812338 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 4 months ago 1.27GB
2025-09-17 16:33:28.973625 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-17 16:33:28.974106 | orchestrator | ++ semver 9.2.0 5.0.0
2025-09-17 16:33:29.020675 | orchestrator |
2025-09-17 16:33:29.020732 | orchestrator | ## Containers @ testbed-node-1
2025-09-17 16:33:29.020745 | orchestrator |
2025-09-17 16:33:29.020757 | orchestrator | + [[ 1 -eq -1 ]]
2025-09-17 16:33:29.020768 | orchestrator | + echo
2025-09-17 16:33:29.020780 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-09-17 16:33:29.020791 | orchestrator | + echo
2025-09-17 16:33:29.020802 | orchestrator | + osism container testbed-node-1 ps
2025-09-17 16:33:31.092076 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-17 16:33:31.092193 | orchestrator | 232cda571fb1 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-09-17 16:33:31.092220 | orchestrator | 62873824679a registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-09-17 16:33:31.092233 | orchestrator | 8cd39b2804f5 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-09-17 16:33:31.092245 | orchestrator | 8e95e000d186 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-09-17 16:33:31.092257 | orchestrator | ffb07ca649f9 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler
2025-09-17 16:33:31.092268 | orchestrator | 68a2e8c65642 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-09-17 16:33:31.092279 | orchestrator | a43494f20cc2 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-09-17 16:33:31.092290 | orchestrator | ee1de2b44239 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-09-17 16:33:31.092302 | orchestrator | 15e5b1f2ccb0 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-09-17 16:33:31.092315 | orchestrator | 1c6647df249b registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor
2025-09-17 16:33:31.092326 | orchestrator | 1665898fa3b8 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 13 minutes ago Up 12 minutes prometheus_memcached_exporter
2025-09-17 16:33:31.092338 | orchestrator | 6ac7093c5f98 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter
2025-09-17 16:33:31.092349 | orchestrator | 422a76d7be99 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-09-17 16:33:31.092360 | orchestrator | f39d12657cbb registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor
2025-09-17 16:33:31.092371 | orchestrator | aae73ac9c8ba registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-09-17 16:33:31.092401 | orchestrator | 708eba00e3aa registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server
2025-09-17 16:33:31.092414 | orchestrator | a105a6abfe6d registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker
2025-09-17 16:33:31.092425 | orchestrator | 7d00169c867a registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns
2025-09-17 16:33:31.092436 | orchestrator | 2c8bf5ebf227 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer
2025-09-17 16:33:31.092467 | orchestrator | 603f71d0e663 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central
2025-09-17 16:33:31.092479 | orchestrator | c9f3dec92efb registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-09-17 16:33:31.092500 | orchestrator | c7217286ad0f registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2025-09-17 16:33:31.092512 | orchestrator | 22419a9de489 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api
2025-09-17 16:33:31.092523 | orchestrator | f8714e058586 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker
2025-09-17 16:33:31.092534 | orchestrator | 6bba32d3eddb registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener
2025-09-17 16:33:31.092544 | orchestrator | 59f8f9b1046f registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api
2025-09-17 16:33:31.092556 | orchestrator | 43ac372f83dd registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1
2025-09-17 16:33:31.092567 | orchestrator | 03d5e9fa399b registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2025-09-17 16:33:31.092578 | orchestrator | 7c98ab55d920 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2025-09-17 16:33:31.092589 | orchestrator | 51af9bbb4fe2 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2025-09-17 16:33:31.092599 | orchestrator | c53d8384a4b2 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-09-17 16:33:31.092610 | orchestrator | 44009dbe8651 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-09-17 16:33:31.092621 | orchestrator | d710e41fa8df registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-09-17 16:33:31.092640 | orchestrator | 7e1ce90cc5ac registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-09-17 16:33:31.092652 | orchestrator | 88074035ab24 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1
2025-09-17 16:33:31.092662 | orchestrator | ad0dcd180268 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-09-17 16:33:31.092673 | orchestrator | fe1ebf327b15 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-09-17 16:33:31.092684 | orchestrator | 93e4e99a6a84 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-09-17 16:33:31.092695 | orchestrator | 5a9af3796d19 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd
2025-09-17 16:33:31.092706 | orchestrator | 514849c06a37 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db
2025-09-17 16:33:31.092723 | orchestrator | 82d0d02e7729 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db
2025-09-17 16:33:31.092734 | orchestrator | d2cbf455a7e7 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-09-17 16:33:31.092745 | orchestrator | a99035bfb10a registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-09-17 16:33:31.092756 | orchestrator | effa9eae06c9 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1
2025-09-17 16:33:31.092772 | orchestrator | e26022c23888 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-09-17 16:33:31.092783 | orchestrator | 82f1bf9aefb5 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-09-17 16:33:31.092794 | orchestrator | 54fc6a9b9656 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-09-17 16:33:31.092805 | orchestrator | 36db24b8c263 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-09-17 16:33:31.092816 | orchestrator | 0780d9d71fca registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-09-17 16:33:31.092827 | orchestrator | 620640b2cc54 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-09-17 16:33:31.092838 | orchestrator | 763162020cfb registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-09-17 16:33:31.092849 | orchestrator | e7b2ff2aa553 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-09-17 16:33:31.286072 | orchestrator |
2025-09-17 16:33:31.286151 | orchestrator | ## Images @ testbed-node-1
2025-09-17 16:33:31.286166 | orchestrator |
2025-09-17 16:33:31.286178 | orchestrator | + echo
2025-09-17 16:33:31.286190 | orchestrator | + echo '## Images @ testbed-node-1'
2025-09-17 16:33:31.286202 | orchestrator | + echo
2025-09-17 16:33:31.286213 | orchestrator | + osism container testbed-node-1 images
2025-09-17 16:33:33.285821 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-17 16:33:33.285926 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB
2025-09-17 16:33:33.285940 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 2 months ago 329MB
2025-09-17 16:33:33.285997 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 2 months ago 326MB
2025-09-17 16:33:33.286010 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 2 months ago 1.59GB
2025-09-17 16:33:33.286068 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 2 months ago 1.55GB
2025-09-17 16:33:33.286080 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 2 months ago 417MB
2025-09-17 16:33:33.286090 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 2 months ago 318MB
2025-09-17 16:33:33.286102 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB
2025-09-17 16:33:33.286112 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 2 months ago 375MB
2025-09-17 16:33:33.286123 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 2 months ago 1.01GB
2025-09-17 16:33:33.286133 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB
2025-09-17 16:33:33.286144 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 2 months ago 361MB
2025-09-17 16:33:33.286154 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 2 months ago 361MB
2025-09-17 16:33:33.286165 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 2 months ago 1.21GB
2025-09-17 16:33:33.286176 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 2 months ago 353MB
2025-09-17 16:33:33.286187 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB
2025-09-17 16:33:33.286197 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 2 months ago 344MB
2025-09-17 16:33:33.286208 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB
2025-09-17 16:33:33.286219 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 2 months ago 324MB
2025-09-17 16:33:33.286230 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 2 months ago 351MB
2025-09-17 16:33:33.286241 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 2 months ago 324MB
2025-09-17 16:33:33.286251 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 2 months ago 590MB
2025-09-17 16:33:33.286262 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 2 months ago 947MB
2025-09-17 16:33:33.286295 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 2 months ago 946MB
2025-09-17 16:33:33.286306 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 2 months ago 947MB
2025-09-17 16:33:33.286317 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 2 months ago 946MB
2025-09-17 16:33:33.286344 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 2 months ago 1.15GB
2025-09-17 16:33:33.286356 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 2 months ago 1.04GB
2025-09-17 16:33:33.286368 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 2 months ago 1.06GB
2025-09-17 16:33:33.286380 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 2 months ago 1.06GB
2025-09-17 16:33:33.286392 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 2 months ago 1.06GB
2025-09-17 16:33:33.286426 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 2 months ago 1.41GB
2025-09-17 16:33:33.286438 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 2 months ago 1.41GB
2025-09-17 16:33:33.286450 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 2 months ago 1.29GB
2025-09-17 16:33:33.286462 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 2 months ago 1.42GB
2025-09-17 16:33:33.286474 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 2 months ago 1.29GB
2025-09-17
16:33:33.286486 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 2 months ago 1.29GB 2025-09-17 16:33:33.286497 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 2 months ago 1.2GB 2025-09-17 16:33:33.286515 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 2 months ago 1.31GB 2025-09-17 16:33:33.286527 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 2 months ago 1.05GB 2025-09-17 16:33:33.286539 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 2 months ago 1.05GB 2025-09-17 16:33:33.286551 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 2 months ago 1.05GB 2025-09-17 16:33:33.286563 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 2 months ago 1.06GB 2025-09-17 16:33:33.286575 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 2 months ago 1.06GB 2025-09-17 16:33:33.286587 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 2 months ago 1.05GB 2025-09-17 16:33:33.286599 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 2 months ago 1.11GB 2025-09-17 16:33:33.286611 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 2 months ago 1.13GB 2025-09-17 16:33:33.286623 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 2 months ago 1.11GB 2025-09-17 16:33:33.286635 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 2 months ago 1.24GB 2025-09-17 16:33:33.286654 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 4 months ago 1.27GB 2025-09-17 16:33:33.539567 | 
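The trace above invokes a `semver 9.2.0 5.0.0` helper and tests its result with `[[ 1 -eq -1 ]]` before printing each node's container listing. A minimal sketch of such a three-part version comparison in POSIX shell (the function body and the 1/-1/0 print convention are inferred from the trace, not taken from the helper's actual source):

```shell
# Compare two MAJOR.MINOR.PATCH versions; print 1, -1, or 0 like the
# `semver` helper invoked in the job trace. Pre-release suffixes are
# deliberately not handled in this sketch.
semver() {
    # Split both versions on dots into positional parameters $1..$6.
    set -- $(printf '%s %s' "$1" "$2" | tr '.' ' ')
    if [ "$1" -ne "$4" ]; then [ "$1" -gt "$4" ] && echo 1 || echo -1; return; fi
    if [ "$2" -ne "$5" ]; then [ "$2" -gt "$5" ] && echo 1 || echo -1; return; fi
    if [ "$3" -ne "$6" ]; then [ "$3" -gt "$6" ] && echo 1 || echo -1; return; fi
    echo 0
}

semver 9.2.0 5.0.0   # prints 1, matching the `[[ 1 -eq -1 ]]` check in the trace
```

With `MANAGER_VERSION=9.2.0` the comparison against 5.0.0 yields 1, so the `-eq -1` branch is skipped and the listing proceeds.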
orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-17 16:33:33.539737 | orchestrator | ++ semver 9.2.0 5.0.0 2025-09-17 16:33:33.590418 | orchestrator | 2025-09-17 16:33:33.590444 | orchestrator | ## Containers @ testbed-node-2 2025-09-17 16:33:33.590456 | orchestrator | 2025-09-17 16:33:33.590467 | orchestrator | + [[ 1 -eq -1 ]] 2025-09-17 16:33:33.590478 | orchestrator | + echo 2025-09-17 16:33:33.590490 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-09-17 16:33:33.590501 | orchestrator | + echo 2025-09-17 16:33:33.590512 | orchestrator | + osism container testbed-node-2 ps 2025-09-17 16:33:35.786927 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-17 16:33:35.787050 | orchestrator | 34985e4c0b89 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-09-17 16:33:35.787064 | orchestrator | 9ce8d0ac3daf registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-09-17 16:33:35.787074 | orchestrator | 99d554e08648 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-09-17 16:33:35.787083 | orchestrator | 9921f75c9057 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-09-17 16:33:35.787092 | orchestrator | 9e3782a50898 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-09-17 16:33:35.787101 | orchestrator | 4a98cc2bddf4 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-09-17 16:33:35.787110 | orchestrator | bdb764268844 
registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-09-17 16:33:35.788028 | orchestrator | 71e11d81ac5f registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-09-17 16:33:35.788044 | orchestrator | a5a93b877d07 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-09-17 16:33:35.788055 | orchestrator | c246d2e636d5 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-09-17 16:33:35.788064 | orchestrator | fbb218387f0d registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-09-17 16:33:35.788091 | orchestrator | 28173390b3f5 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-09-17 16:33:35.788101 | orchestrator | a56b7636e747 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-09-17 16:33:35.789450 | orchestrator | e9f786950f03 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-09-17 16:33:35.789484 | orchestrator | cea3b6e0fc1f registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-09-17 16:33:35.789494 | orchestrator | 24e2efc258a6 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server 2025-09-17 
16:33:35.789503 | orchestrator | 88971ad01a15 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2025-09-17 16:33:35.789512 | orchestrator | c3d20e76c0f5 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns 2025-09-17 16:33:35.789521 | orchestrator | 614acd730593 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer 2025-09-17 16:33:35.789530 | orchestrator | 49ff6259a2da registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-09-17 16:33:35.789538 | orchestrator | 6d170e87d9e1 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-09-17 16:33:35.789547 | orchestrator | a6714259a534 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-09-17 16:33:35.789556 | orchestrator | 610df25c5f47 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-09-17 16:33:35.789565 | orchestrator | f70d111f3059 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2025-09-17 16:33:35.789574 | orchestrator | 98b73189f5ca registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2025-09-17 16:33:35.789583 | orchestrator | d3e768b7559e registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 
15 minutes (healthy) barbican_api 2025-09-17 16:33:35.789592 | orchestrator | 69937b8f7669 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2025-09-17 16:33:35.789600 | orchestrator | 002b4c9b1857 registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-09-17 16:33:35.789609 | orchestrator | 5253f06501ed registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-09-17 16:33:35.789618 | orchestrator | ba2d00763c90 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-09-17 16:33:35.789627 | orchestrator | c189496186ee registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-09-17 16:33:35.789636 | orchestrator | 913db636b1d7 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-09-17 16:33:35.789650 | orchestrator | 62b4e348240a registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-09-17 16:33:35.789659 | orchestrator | 15f1fcdfe69b registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-09-17 16:33:35.789677 | orchestrator | 514346d55ff7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2 2025-09-17 16:33:35.789686 | orchestrator | 20805db69a3a registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-09-17 16:33:35.789695 | orchestrator | 76a7fa25265a 
registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-09-17 16:33:35.789704 | orchestrator | de0ce4d9002c registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-09-17 16:33:35.789713 | orchestrator | e2535cf683cf registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd 2025-09-17 16:33:35.789722 | orchestrator | 7494d41d5060 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-09-17 16:33:35.789731 | orchestrator | f50f3c45ec65 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-09-17 16:33:35.789740 | orchestrator | 7886d0be2f1a registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-09-17 16:33:35.789749 | orchestrator | 66178a711a09 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-09-17 16:33:35.789758 | orchestrator | e957fc642e79 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2 2025-09-17 16:33:35.789774 | orchestrator | b193adbb94fe registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-09-17 16:33:35.789783 | orchestrator | 97e9a8dfeaa0 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-09-17 16:33:35.789792 | orchestrator | 28af7c8f0143 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 
29 minutes (healthy) redis_sentinel 2025-09-17 16:33:35.789801 | orchestrator | 2b5d50405680 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-09-17 16:33:35.789810 | orchestrator | e48798cfc831 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-09-17 16:33:35.789818 | orchestrator | 551c27202340 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-09-17 16:33:35.789827 | orchestrator | 46e8121cef7e registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-09-17 16:33:35.789842 | orchestrator | a84120827cec registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-09-17 16:33:36.031347 | orchestrator | 2025-09-17 16:33:36.031427 | orchestrator | ## Images @ testbed-node-2 2025-09-17 16:33:36.031439 | orchestrator | 2025-09-17 16:33:36.031451 | orchestrator | + echo 2025-09-17 16:33:36.031463 | orchestrator | + echo '## Images @ testbed-node-2' 2025-09-17 16:33:36.031475 | orchestrator | + echo 2025-09-17 16:33:36.031486 | orchestrator | + osism container testbed-node-2 images 2025-09-17 16:33:38.177584 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-17 16:33:38.177678 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB 2025-09-17 16:33:38.177691 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 2 months ago 329MB 2025-09-17 16:33:38.177703 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 2 months ago 326MB 2025-09-17 16:33:38.177714 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 2 months ago 1.59GB 2025-09-17 
16:33:38.177724 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 2 months ago 1.55GB 2025-09-17 16:33:38.177735 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 2 months ago 417MB 2025-09-17 16:33:38.177746 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 2 months ago 318MB 2025-09-17 16:33:38.177756 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 2 months ago 375MB 2025-09-17 16:33:38.177767 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB 2025-09-17 16:33:38.177778 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 2 months ago 1.01GB 2025-09-17 16:33:38.177788 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB 2025-09-17 16:33:38.177799 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 2 months ago 361MB 2025-09-17 16:33:38.177810 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 2 months ago 361MB 2025-09-17 16:33:38.177822 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 2 months ago 1.21GB 2025-09-17 16:33:38.177833 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 2 months ago 353MB 2025-09-17 16:33:38.177843 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB 2025-09-17 16:33:38.177854 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 2 months ago 344MB 2025-09-17 16:33:38.177864 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB 2025-09-17 
16:33:38.177875 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 2 months ago 351MB 2025-09-17 16:33:38.177886 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 2 months ago 324MB 2025-09-17 16:33:38.177897 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 2 months ago 324MB 2025-09-17 16:33:38.177907 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 2 months ago 590MB 2025-09-17 16:33:38.177937 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 2 months ago 947MB 2025-09-17 16:33:38.177993 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 2 months ago 946MB 2025-09-17 16:33:38.178007 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 2 months ago 947MB 2025-09-17 16:33:38.178068 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 2 months ago 946MB 2025-09-17 16:33:38.178079 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 2 months ago 1.15GB 2025-09-17 16:33:38.178090 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 2 months ago 1.04GB 2025-09-17 16:33:38.178101 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 2 months ago 1.06GB 2025-09-17 16:33:38.178113 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 2 months ago 1.06GB 2025-09-17 16:33:38.178125 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 2 months ago 1.06GB 2025-09-17 16:33:38.178157 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 2 months ago 1.41GB 2025-09-17 
16:33:38.178170 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 2 months ago 1.41GB 2025-09-17 16:33:38.178183 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 2 months ago 1.29GB 2025-09-17 16:33:38.178195 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 2 months ago 1.42GB 2025-09-17 16:33:38.178207 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 2 months ago 1.29GB 2025-09-17 16:33:38.178219 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 2 months ago 1.29GB 2025-09-17 16:33:38.178231 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 2 months ago 1.2GB 2025-09-17 16:33:38.178243 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 2 months ago 1.31GB 2025-09-17 16:33:38.178254 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 2 months ago 1.05GB 2025-09-17 16:33:38.178265 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 2 months ago 1.05GB 2025-09-17 16:33:38.178276 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 2 months ago 1.05GB 2025-09-17 16:33:38.178286 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 2 months ago 1.06GB 2025-09-17 16:33:38.178306 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 2 months ago 1.06GB 2025-09-17 16:33:38.178317 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 2 months ago 1.05GB 2025-09-17 16:33:38.178328 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 2 months ago 1.11GB 2025-09-17 
16:33:38.178339 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 2 months ago 1.13GB 2025-09-17 16:33:38.178349 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 2 months ago 1.11GB 2025-09-17 16:33:38.178368 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 2 months ago 1.24GB 2025-09-17 16:33:38.178379 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 4 months ago 1.27GB 2025-09-17 16:33:38.427573 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-09-17 16:33:38.434100 | orchestrator | + set -e 2025-09-17 16:33:38.434124 | orchestrator | + source /opt/manager-vars.sh 2025-09-17 16:33:38.435205 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-17 16:33:38.435226 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-17 16:33:38.435238 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-17 16:33:38.435249 | orchestrator | ++ CEPH_VERSION=reef 2025-09-17 16:33:38.435260 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-17 16:33:38.435272 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-17 16:33:38.435283 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-17 16:33:38.435294 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-17 16:33:38.435305 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-17 16:33:38.435315 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-17 16:33:38.435326 | orchestrator | ++ export ARA=false 2025-09-17 16:33:38.435337 | orchestrator | ++ ARA=false 2025-09-17 16:33:38.435349 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-17 16:33:38.435360 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-17 16:33:38.435371 | orchestrator | ++ export TEMPEST=false 2025-09-17 16:33:38.435381 | orchestrator | ++ TEMPEST=false 2025-09-17 16:33:38.435392 | orchestrator | ++ export IS_ZUUL=true 2025-09-17 16:33:38.435403 | orchestrator | ++ IS_ZUUL=true 
2025-09-17 16:33:38.435414 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.205 2025-09-17 16:33:38.435425 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.205 2025-09-17 16:33:38.435436 | orchestrator | ++ export EXTERNAL_API=false 2025-09-17 16:33:38.435446 | orchestrator | ++ EXTERNAL_API=false 2025-09-17 16:33:38.435457 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-17 16:33:38.435467 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-17 16:33:38.435478 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-17 16:33:38.435489 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-17 16:33:38.435500 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-17 16:33:38.435510 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-17 16:33:38.435521 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-17 16:33:38.435532 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-09-17 16:33:38.445477 | orchestrator | + set -e 2025-09-17 16:33:38.445498 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-17 16:33:38.446905 | orchestrator | ++ export INTERACTIVE=false 2025-09-17 16:33:38.446923 | orchestrator | ++ INTERACTIVE=false 2025-09-17 16:33:38.446934 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-17 16:33:38.446944 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-17 16:33:38.446985 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-17 16:33:38.448419 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-17 16:33:38.451571 | orchestrator | 2025-09-17 16:33:38.451592 | orchestrator | # Ceph status 2025-09-17 16:33:38.451603 | orchestrator | 2025-09-17 16:33:38.451614 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-17 16:33:38.451630 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-17 16:33:38.451641 | orchestrator | + echo 2025-09-17 16:33:38.451652 
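The `manager-version.sh` step in the trace derives `MANAGER_VERSION` by pulling the top-level `manager_version:` key out of the configuration YAML with awk. A standalone sketch of that extraction (the sample file below is a stand-in for `/opt/configuration/environments/manager/configuration.yml`):

```shell
# Recreate a minimal configuration.yml and extract the manager_version
# scalar from it, as manager-version.sh does in the trace above.
cat > /tmp/configuration.yml <<'EOF'
---
manager_version: 9.2.0
openstack_version: 2024.2
EOF

# -F': ' splits "key: value" lines; print the value for manager_version.
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' /tmp/configuration.yml)
echo "$MANAGER_VERSION"
```

This matches the `MANAGER_VERSION=9.2.0` export that appears immediately afterwards in the trace; a full YAML parser is unnecessary because the key is a simple top-level scalar.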
| orchestrator | + echo '# Ceph status' 2025-09-17 16:33:38.451663 | orchestrator | + echo 2025-09-17 16:33:38.451674 | orchestrator | + ceph -s 2025-09-17 16:33:38.993863 | orchestrator | cluster: 2025-09-17 16:33:38.993935 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-09-17 16:33:38.993947 | orchestrator | health: HEALTH_OK 2025-09-17 16:33:38.994003 | orchestrator | 2025-09-17 16:33:38.994015 | orchestrator | services: 2025-09-17 16:33:38.994079 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m) 2025-09-17 16:33:38.994093 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-2, testbed-node-1 2025-09-17 16:33:38.994105 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-09-17 16:33:38.994116 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 24m) 2025-09-17 16:33:38.994127 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-09-17 16:33:38.994138 | orchestrator | 2025-09-17 16:33:38.994150 | orchestrator | data: 2025-09-17 16:33:38.994161 | orchestrator | volumes: 1/1 healthy 2025-09-17 16:33:38.994188 | orchestrator | pools: 14 pools, 401 pgs 2025-09-17 16:33:38.994227 | orchestrator | objects: 523 objects, 2.2 GiB 2025-09-17 16:33:38.994239 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-09-17 16:33:38.994249 | orchestrator | pgs: 401 active+clean 2025-09-17 16:33:38.994260 | orchestrator | 2025-09-17 16:33:39.047461 | orchestrator | 2025-09-17 16:33:39.047523 | orchestrator | # Ceph versions 2025-09-17 16:33:39.047538 | orchestrator | 2025-09-17 16:33:39.047551 | orchestrator | + echo 2025-09-17 16:33:39.047563 | orchestrator | + echo '# Ceph versions' 2025-09-17 16:33:39.047575 | orchestrator | + echo 2025-09-17 16:33:39.047586 | orchestrator | + ceph versions 2025-09-17 16:33:39.613734 | orchestrator | { 2025-09-17 16:33:39.613818 | orchestrator | "mon": { 2025-09-17 16:33:39.613832 | orchestrator | "ceph version 18.2.7 
(6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-17 16:33:39.613844 | orchestrator | }, 2025-09-17 16:33:39.613855 | orchestrator | "mgr": { 2025-09-17 16:33:39.613866 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-17 16:33:39.613877 | orchestrator | }, 2025-09-17 16:33:39.613888 | orchestrator | "osd": { 2025-09-17 16:33:39.613899 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-09-17 16:33:39.613909 | orchestrator | }, 2025-09-17 16:33:39.613920 | orchestrator | "mds": { 2025-09-17 16:33:39.613931 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-17 16:33:39.613942 | orchestrator | }, 2025-09-17 16:33:39.614002 | orchestrator | "rgw": { 2025-09-17 16:33:39.614015 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-17 16:33:39.614071 | orchestrator | }, 2025-09-17 16:33:39.614083 | orchestrator | "overall": { 2025-09-17 16:33:39.614094 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-09-17 16:33:39.614105 | orchestrator | } 2025-09-17 16:33:39.614116 | orchestrator | } 2025-09-17 16:33:39.655897 | orchestrator | 2025-09-17 16:33:39.655944 | orchestrator | # Ceph OSD tree 2025-09-17 16:33:39.655989 | orchestrator | 2025-09-17 16:33:39.656001 | orchestrator | + echo 2025-09-17 16:33:39.656013 | orchestrator | + echo '# Ceph OSD tree' 2025-09-17 16:33:39.656025 | orchestrator | + echo 2025-09-17 16:33:39.656036 | orchestrator | + ceph osd df tree 2025-09-17 16:33:40.162564 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-09-17 16:33:40.162658 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 434 MiB 113 GiB 5.92 1.00 - root default 2025-09-17 16:33:40.162670 | orchestrator | -3 0.03897 - 40 GiB 2.4 
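The `ceph versions` output above reports the same 18.2.7 reef build for all 18 daemons. A quick uniformity check over that JSON, using only grep/sort rather than jq (the sample document below mirrors the `overall` section of the log output; the check itself is a sketch, not part of the testbed scripts):

```shell
# A mixed-version cluster would list several "ceph version ..." keys
# under "overall"; counting distinct keys therefore verifies that the
# whole cluster runs a single release.
cat > /tmp/ceph-versions.json <<'EOF'
{
  "overall": {
    "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
  }
}
EOF

distinct=$(grep -o '"ceph version [^"]*"' /tmp/ceph-versions.json | sort -u | wc -l)
[ "$distinct" -eq 1 ] && echo uniform || echo mixed
```

During a rolling upgrade the count would temporarily exceed one, which is exactly the condition such a check is meant to flag.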
GiB 2.2 GiB 2 KiB 147 MiB 38 GiB 5.93 1.00 - host testbed-node-3 2025-09-17 16:33:40.162682 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 7.02 1.19 200 up osd.0 2025-09-17 16:33:40.162693 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 989 MiB 915 MiB 1 KiB 74 MiB 19 GiB 4.83 0.82 190 up osd.4 2025-09-17 16:33:40.162704 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-09-17 16:33:40.162714 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.36 0.91 192 up osd.1 2025-09-17 16:33:40.162725 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.47 1.09 200 up osd.5 2025-09-17 16:33:40.162736 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-09-17 16:33:40.162747 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.36 1.24 189 up osd.2 2025-09-17 16:33:40.162757 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 917 MiB 843 MiB 1 KiB 74 MiB 19 GiB 4.48 0.76 199 up osd.3 2025-09-17 16:33:40.162768 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 434 MiB 113 GiB 5.92 2025-09-17 16:33:40.162779 | orchestrator | MIN/MAX VAR: 0.76/1.24 STDDEV: 1.09 2025-09-17 16:33:40.215201 | orchestrator | 2025-09-17 16:33:40.215259 | orchestrator | # Ceph monitor status 2025-09-17 16:33:40.215271 | orchestrator | 2025-09-17 16:33:40.215283 | orchestrator | + echo 2025-09-17 16:33:40.215294 | orchestrator | + echo '# Ceph monitor status' 2025-09-17 16:33:40.215305 | orchestrator | + echo 2025-09-17 16:33:40.215316 | orchestrator | + ceph mon stat 2025-09-17 16:33:40.771498 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, 
election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-09-17 16:33:40.812178 | orchestrator | 2025-09-17 16:33:40.812930 | orchestrator | # Ceph quorum status 2025-09-17 16:33:40.812979 | orchestrator | 2025-09-17 16:33:40.812992 | orchestrator | + echo 2025-09-17 16:33:40.813003 | orchestrator | + echo '# Ceph quorum status' 2025-09-17 16:33:40.813015 | orchestrator | + echo 2025-09-17 16:33:40.813036 | orchestrator | + ceph quorum_status 2025-09-17 16:33:40.813047 | orchestrator | + jq 2025-09-17 16:33:41.442136 | orchestrator | { 2025-09-17 16:33:41.442222 | orchestrator | "election_epoch": 8, 2025-09-17 16:33:41.442234 | orchestrator | "quorum": [ 2025-09-17 16:33:41.442246 | orchestrator | 0, 2025-09-17 16:33:41.442257 | orchestrator | 1, 2025-09-17 16:33:41.442268 | orchestrator | 2 2025-09-17 16:33:41.442279 | orchestrator | ], 2025-09-17 16:33:41.442290 | orchestrator | "quorum_names": [ 2025-09-17 16:33:41.442301 | orchestrator | "testbed-node-0", 2025-09-17 16:33:41.442312 | orchestrator | "testbed-node-1", 2025-09-17 16:33:41.442323 | orchestrator | "testbed-node-2" 2025-09-17 16:33:41.442334 | orchestrator | ], 2025-09-17 16:33:41.442344 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-09-17 16:33:41.442356 | orchestrator | "quorum_age": 1669, 2025-09-17 16:33:41.442367 | orchestrator | "features": { 2025-09-17 16:33:41.442378 | orchestrator | "quorum_con": "4540138322906710015", 2025-09-17 16:33:41.442389 | orchestrator | "quorum_mon": [ 2025-09-17 16:33:41.442400 | orchestrator | "kraken", 2025-09-17 16:33:41.442411 | orchestrator | "luminous", 2025-09-17 16:33:41.442421 | orchestrator | "mimic", 2025-09-17 16:33:41.442432 | orchestrator | "osdmap-prune", 2025-09-17 16:33:41.442443 | orchestrator | "nautilus", 2025-09-17 16:33:41.442453 | orchestrator | "octopus", 2025-09-17 16:33:41.442464 | orchestrator | "pacific", 2025-09-17 16:33:41.442475 | orchestrator | "elector-pinging", 
2025-09-17 16:33:41.442485 | orchestrator | "quincy", 2025-09-17 16:33:41.442496 | orchestrator | "reef" 2025-09-17 16:33:41.442507 | orchestrator | ] 2025-09-17 16:33:41.442517 | orchestrator | }, 2025-09-17 16:33:41.442528 | orchestrator | "monmap": { 2025-09-17 16:33:41.442539 | orchestrator | "epoch": 1, 2025-09-17 16:33:41.442549 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-09-17 16:33:41.442561 | orchestrator | "modified": "2025-09-17T16:05:36.253835Z", 2025-09-17 16:33:41.442572 | orchestrator | "created": "2025-09-17T16:05:36.253835Z", 2025-09-17 16:33:41.442582 | orchestrator | "min_mon_release": 18, 2025-09-17 16:33:41.442593 | orchestrator | "min_mon_release_name": "reef", 2025-09-17 16:33:41.442606 | orchestrator | "election_strategy": 1, 2025-09-17 16:33:41.442617 | orchestrator | "disallowed_leaders: ": "", 2025-09-17 16:33:41.442630 | orchestrator | "stretch_mode": false, 2025-09-17 16:33:41.442642 | orchestrator | "tiebreaker_mon": "", 2025-09-17 16:33:41.442653 | orchestrator | "removed_ranks: ": "", 2025-09-17 16:33:41.442665 | orchestrator | "features": { 2025-09-17 16:33:41.442676 | orchestrator | "persistent": [ 2025-09-17 16:33:41.442687 | orchestrator | "kraken", 2025-09-17 16:33:41.442699 | orchestrator | "luminous", 2025-09-17 16:33:41.442711 | orchestrator | "mimic", 2025-09-17 16:33:41.442723 | orchestrator | "osdmap-prune", 2025-09-17 16:33:41.442735 | orchestrator | "nautilus", 2025-09-17 16:33:41.442747 | orchestrator | "octopus", 2025-09-17 16:33:41.442758 | orchestrator | "pacific", 2025-09-17 16:33:41.442770 | orchestrator | "elector-pinging", 2025-09-17 16:33:41.442781 | orchestrator | "quincy", 2025-09-17 16:33:41.442793 | orchestrator | "reef" 2025-09-17 16:33:41.442805 | orchestrator | ], 2025-09-17 16:33:41.442816 | orchestrator | "optional": [] 2025-09-17 16:33:41.442828 | orchestrator | }, 2025-09-17 16:33:41.442840 | orchestrator | "mons": [ 2025-09-17 16:33:41.442852 | orchestrator | { 2025-09-17 
16:33:41.442863 | orchestrator | "rank": 0, 2025-09-17 16:33:41.442875 | orchestrator | "name": "testbed-node-0", 2025-09-17 16:33:41.442887 | orchestrator | "public_addrs": { 2025-09-17 16:33:41.442898 | orchestrator | "addrvec": [ 2025-09-17 16:33:41.442909 | orchestrator | { 2025-09-17 16:33:41.442921 | orchestrator | "type": "v2", 2025-09-17 16:33:41.442978 | orchestrator | "addr": "192.168.16.10:3300", 2025-09-17 16:33:41.442990 | orchestrator | "nonce": 0 2025-09-17 16:33:41.443001 | orchestrator | }, 2025-09-17 16:33:41.443012 | orchestrator | { 2025-09-17 16:33:41.443022 | orchestrator | "type": "v1", 2025-09-17 16:33:41.443033 | orchestrator | "addr": "192.168.16.10:6789", 2025-09-17 16:33:41.443043 | orchestrator | "nonce": 0 2025-09-17 16:33:41.443054 | orchestrator | } 2025-09-17 16:33:41.443065 | orchestrator | ] 2025-09-17 16:33:41.443075 | orchestrator | }, 2025-09-17 16:33:41.443086 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-09-17 16:33:41.443097 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-09-17 16:33:41.443107 | orchestrator | "priority": 0, 2025-09-17 16:33:41.443118 | orchestrator | "weight": 0, 2025-09-17 16:33:41.443129 | orchestrator | "crush_location": "{}" 2025-09-17 16:33:41.443139 | orchestrator | }, 2025-09-17 16:33:41.443150 | orchestrator | { 2025-09-17 16:33:41.443161 | orchestrator | "rank": 1, 2025-09-17 16:33:41.443171 | orchestrator | "name": "testbed-node-1", 2025-09-17 16:33:41.443182 | orchestrator | "public_addrs": { 2025-09-17 16:33:41.443193 | orchestrator | "addrvec": [ 2025-09-17 16:33:41.443204 | orchestrator | { 2025-09-17 16:33:41.443214 | orchestrator | "type": "v2", 2025-09-17 16:33:41.443225 | orchestrator | "addr": "192.168.16.11:3300", 2025-09-17 16:33:41.443236 | orchestrator | "nonce": 0 2025-09-17 16:33:41.443247 | orchestrator | }, 2025-09-17 16:33:41.443257 | orchestrator | { 2025-09-17 16:33:41.443268 | orchestrator | "type": "v1", 2025-09-17 16:33:41.443278 | orchestrator | "addr": 
"192.168.16.11:6789", 2025-09-17 16:33:41.443289 | orchestrator | "nonce": 0 2025-09-17 16:33:41.443300 | orchestrator | } 2025-09-17 16:33:41.443311 | orchestrator | ] 2025-09-17 16:33:41.443321 | orchestrator | }, 2025-09-17 16:33:41.443332 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-09-17 16:33:41.443343 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-09-17 16:33:41.443353 | orchestrator | "priority": 0, 2025-09-17 16:33:41.443364 | orchestrator | "weight": 0, 2025-09-17 16:33:41.443374 | orchestrator | "crush_location": "{}" 2025-09-17 16:33:41.443385 | orchestrator | }, 2025-09-17 16:33:41.443396 | orchestrator | { 2025-09-17 16:33:41.443423 | orchestrator | "rank": 2, 2025-09-17 16:33:41.443434 | orchestrator | "name": "testbed-node-2", 2025-09-17 16:33:41.443445 | orchestrator | "public_addrs": { 2025-09-17 16:33:41.443456 | orchestrator | "addrvec": [ 2025-09-17 16:33:41.443466 | orchestrator | { 2025-09-17 16:33:41.443477 | orchestrator | "type": "v2", 2025-09-17 16:33:41.443488 | orchestrator | "addr": "192.168.16.12:3300", 2025-09-17 16:33:41.443498 | orchestrator | "nonce": 0 2025-09-17 16:33:41.443509 | orchestrator | }, 2025-09-17 16:33:41.443519 | orchestrator | { 2025-09-17 16:33:41.443530 | orchestrator | "type": "v1", 2025-09-17 16:33:41.443541 | orchestrator | "addr": "192.168.16.12:6789", 2025-09-17 16:33:41.443552 | orchestrator | "nonce": 0 2025-09-17 16:33:41.443562 | orchestrator | } 2025-09-17 16:33:41.443573 | orchestrator | ] 2025-09-17 16:33:41.443584 | orchestrator | }, 2025-09-17 16:33:41.443594 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-09-17 16:33:41.443605 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-09-17 16:33:41.443616 | orchestrator | "priority": 0, 2025-09-17 16:33:41.443627 | orchestrator | "weight": 0, 2025-09-17 16:33:41.443637 | orchestrator | "crush_location": "{}" 2025-09-17 16:33:41.443648 | orchestrator | } 2025-09-17 16:33:41.443658 | orchestrator | ] 2025-09-17 
16:33:41.443669 | orchestrator | }
2025-09-17 16:33:41.443680 | orchestrator | }
2025-09-17 16:33:41.443702 | orchestrator |
2025-09-17 16:33:41.443713 | orchestrator | + echo
2025-09-17 16:33:41.443724 | orchestrator | # Ceph free space status
2025-09-17 16:33:41.443735 | orchestrator |
2025-09-17 16:33:41.443746 | orchestrator | + echo '# Ceph free space status'
2025-09-17 16:33:41.443757 | orchestrator | + echo
2025-09-17 16:33:41.443767 | orchestrator | + ceph df
2025-09-17 16:33:42.037441 | orchestrator | --- RAW STORAGE ---
2025-09-17 16:33:42.037538 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-09-17 16:33:42.037567 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-09-17 16:33:42.037580 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-09-17 16:33:42.037592 | orchestrator |
2025-09-17 16:33:42.037604 | orchestrator | --- POOLS ---
2025-09-17 16:33:42.037644 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-09-17 16:33:42.037658 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2025-09-17 16:33:42.037669 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-09-17 16:33:42.037681 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-09-17 16:33:42.037692 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-09-17 16:33:42.037703 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-09-17 16:33:42.037714 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-09-17 16:33:42.037725 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2025-09-17 16:33:42.037736 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-09-17 16:33:42.037747 | orchestrator | .rgw.root 9 32 3.0 KiB 7 56 KiB 0 53 GiB
2025-09-17 16:33:42.037758 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-09-17 16:33:42.037769 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-09-17 16:33:42.037780 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.96 35 GiB
2025-09-17 16:33:42.037791 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-09-17 16:33:42.037802 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-09-17 16:33:42.081305 | orchestrator | ++ semver 9.2.0 5.0.0
2025-09-17 16:33:42.141200 | orchestrator | + [[ 1 -eq -1 ]]
2025-09-17 16:33:42.141244 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-09-17 16:33:42.141257 | orchestrator | + osism apply facts
2025-09-17 16:33:54.089461 | orchestrator | 2025-09-17 16:33:54 | INFO  | Task 31d847d4-509b-461f-a189-0cabbc8cba6a (facts) was prepared for execution.
2025-09-17 16:33:54.089571 | orchestrator | 2025-09-17 16:33:54 | INFO  | It takes a moment until task 31d847d4-509b-461f-a189-0cabbc8cba6a (facts) has been started and output is visible here.
2025-09-17 16:34:06.784607 | orchestrator |
2025-09-17 16:34:06.784720 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-17 16:34:06.784736 | orchestrator |
2025-09-17 16:34:06.784748 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-17 16:34:06.784760 | orchestrator | Wednesday 17 September 2025 16:33:58 +0000 (0:00:00.277) 0:00:00.277 ***
2025-09-17 16:34:06.784772 | orchestrator | ok: [testbed-manager]
2025-09-17 16:34:06.784784 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:34:06.784795 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:34:06.784824 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:34:06.784835 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:34:06.784846 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:34:06.784856 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:34:06.784867 | orchestrator |
2025-09-17 16:34:06.784878 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-17 16:34:06.784889 | orchestrator | Wednesday 17 September 2025 16:33:59 +0000 (0:00:01.471) 0:00:01.748 ***
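The `++ semver 9.2.0 5.0.0` step in the trace above gates the deploy path on a version comparison; the following `[[ 1 -eq -1 ]]` shows the helper returned 1 (9.2.0 sorts above 5.0.0). A minimal sketch of such a three-way comparison in shell, using GNU `sort -V`; `semver_cmp` is a hypothetical stand-in for the helper actually invoked, not its real implementation:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the semver helper traced above: prints -1, 0,
# or 1 depending on how version $1 compares to version $2.
semver_cmp() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1   # $1 sorts first under version ordering, so it is the lower one
  else
    echo 1
  fi
}

semver_cmp 9.2.0 5.0.0   # prints 1, matching the '[[ 1 -eq -1 ]]' check above
```

`sort -V` handles multi-digit components correctly (so 9.10.0 > 9.2.0), which a plain lexical comparison would get wrong.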
2025-09-17 16:34:06.784900 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:34:06.784912 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:06.784923 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:34:06.784934 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:34:06.784944 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:34:06.785009 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:34:06.785022 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:34:06.785033 | orchestrator | 2025-09-17 16:34:06.785044 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-17 16:34:06.785055 | orchestrator | 2025-09-17 16:34:06.785066 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-17 16:34:06.785077 | orchestrator | Wednesday 17 September 2025 16:34:00 +0000 (0:00:01.237) 0:00:02.986 *** 2025-09-17 16:34:06.785118 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:34:06.785139 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:34:06.785158 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:06.785179 | orchestrator | ok: [testbed-manager] 2025-09-17 16:34:06.785192 | orchestrator | ok: [testbed-node-5] 2025-09-17 16:34:06.785205 | orchestrator | ok: [testbed-node-3] 2025-09-17 16:34:06.785216 | orchestrator | ok: [testbed-node-4] 2025-09-17 16:34:06.785228 | orchestrator | 2025-09-17 16:34:06.785241 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-17 16:34:06.785254 | orchestrator | 2025-09-17 16:34:06.785266 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-17 16:34:06.785279 | orchestrator | Wednesday 17 September 2025 16:34:05 +0000 (0:00:05.105) 0:00:08.092 *** 2025-09-17 16:34:06.785291 | orchestrator | skipping: [testbed-manager] 2025-09-17 16:34:06.785304 | orchestrator | skipping: 
[testbed-node-0] 2025-09-17 16:34:06.785315 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:34:06.785326 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:34:06.785337 | orchestrator | skipping: [testbed-node-3] 2025-09-17 16:34:06.785347 | orchestrator | skipping: [testbed-node-4] 2025-09-17 16:34:06.785358 | orchestrator | skipping: [testbed-node-5] 2025-09-17 16:34:06.785368 | orchestrator | 2025-09-17 16:34:06.785379 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:34:06.785390 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:34:06.785402 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:34:06.785413 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:34:06.785424 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:34:06.785435 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:34:06.785446 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:34:06.785456 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:34:06.785467 | orchestrator | 2025-09-17 16:34:06.785478 | orchestrator | 2025-09-17 16:34:06.785489 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:34:06.785499 | orchestrator | Wednesday 17 September 2025 16:34:06 +0000 (0:00:00.517) 0:00:08.610 *** 2025-09-17 16:34:06.785556 | orchestrator | =============================================================================== 2025-09-17 16:34:06.785570 | orchestrator | Gathers facts about hosts 
----------------------------------------------- 5.11s 2025-09-17 16:34:06.785581 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.47s 2025-09-17 16:34:06.785592 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2025-09-17 16:34:06.785602 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-09-17 16:34:07.016344 | orchestrator | + osism validate ceph-mons 2025-09-17 16:34:38.015761 | orchestrator | 2025-09-17 16:34:38.015834 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-09-17 16:34:38.015841 | orchestrator | 2025-09-17 16:34:38.015845 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-17 16:34:38.015850 | orchestrator | Wednesday 17 September 2025 16:34:23 +0000 (0:00:00.414) 0:00:00.414 *** 2025-09-17 16:34:38.015855 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-17 16:34:38.015873 | orchestrator | 2025-09-17 16:34:38.015878 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-17 16:34:38.015881 | orchestrator | Wednesday 17 September 2025 16:34:23 +0000 (0:00:00.598) 0:00:01.013 *** 2025-09-17 16:34:38.015886 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-17 16:34:38.015889 | orchestrator | 2025-09-17 16:34:38.015893 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-17 16:34:38.015897 | orchestrator | Wednesday 17 September 2025 16:34:24 +0000 (0:00:00.847) 0:00:01.860 *** 2025-09-17 16:34:38.015901 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.015906 | orchestrator | 2025-09-17 16:34:38.015910 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-17 16:34:38.015913 | 
orchestrator | Wednesday 17 September 2025 16:34:24 +0000 (0:00:00.208) 0:00:02.068 *** 2025-09-17 16:34:38.015917 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.015921 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:34:38.015925 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:34:38.015929 | orchestrator | 2025-09-17 16:34:38.015933 | orchestrator | TASK [Get container info] ****************************************************** 2025-09-17 16:34:38.015937 | orchestrator | Wednesday 17 September 2025 16:34:24 +0000 (0:00:00.260) 0:00:02.329 *** 2025-09-17 16:34:38.015941 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.015945 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:34:38.015949 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:34:38.015953 | orchestrator | 2025-09-17 16:34:38.015956 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-17 16:34:38.015995 | orchestrator | Wednesday 17 September 2025 16:34:25 +0000 (0:00:01.031) 0:00:03.361 *** 2025-09-17 16:34:38.016000 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016004 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:34:38.016007 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:34:38.016011 | orchestrator | 2025-09-17 16:34:38.016015 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-17 16:34:38.016019 | orchestrator | Wednesday 17 September 2025 16:34:26 +0000 (0:00:00.265) 0:00:03.626 *** 2025-09-17 16:34:38.016023 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.016027 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:34:38.016030 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:34:38.016034 | orchestrator | 2025-09-17 16:34:38.016038 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-17 16:34:38.016042 | orchestrator | Wednesday 17 September 2025 16:34:26 +0000 
(0:00:00.437) 0:00:04.064 *** 2025-09-17 16:34:38.016046 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.016049 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:34:38.016053 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:34:38.016057 | orchestrator | 2025-09-17 16:34:38.016061 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-09-17 16:34:38.016065 | orchestrator | Wednesday 17 September 2025 16:34:26 +0000 (0:00:00.288) 0:00:04.352 *** 2025-09-17 16:34:38.016069 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016072 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:34:38.016076 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:34:38.016080 | orchestrator | 2025-09-17 16:34:38.016084 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-09-17 16:34:38.016088 | orchestrator | Wednesday 17 September 2025 16:34:27 +0000 (0:00:00.251) 0:00:04.603 *** 2025-09-17 16:34:38.016092 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.016095 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:34:38.016099 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:34:38.016103 | orchestrator | 2025-09-17 16:34:38.016107 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-17 16:34:38.016111 | orchestrator | Wednesday 17 September 2025 16:34:27 +0000 (0:00:00.300) 0:00:04.904 *** 2025-09-17 16:34:38.016114 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016118 | orchestrator | 2025-09-17 16:34:38.016122 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-17 16:34:38.016130 | orchestrator | Wednesday 17 September 2025 16:34:27 +0000 (0:00:00.229) 0:00:05.134 *** 2025-09-17 16:34:38.016147 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016151 | orchestrator | 2025-09-17 16:34:38.016155 | 
orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-17 16:34:38.016159 | orchestrator | Wednesday 17 September 2025 16:34:28 +0000 (0:00:00.577) 0:00:05.712 *** 2025-09-17 16:34:38.016162 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016166 | orchestrator | 2025-09-17 16:34:38.016170 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-17 16:34:38.016174 | orchestrator | Wednesday 17 September 2025 16:34:28 +0000 (0:00:00.244) 0:00:05.957 *** 2025-09-17 16:34:38.016177 | orchestrator | 2025-09-17 16:34:38.016181 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-17 16:34:38.016185 | orchestrator | Wednesday 17 September 2025 16:34:28 +0000 (0:00:00.063) 0:00:06.021 *** 2025-09-17 16:34:38.016189 | orchestrator | 2025-09-17 16:34:38.016192 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-17 16:34:38.016196 | orchestrator | Wednesday 17 September 2025 16:34:28 +0000 (0:00:00.065) 0:00:06.086 *** 2025-09-17 16:34:38.016200 | orchestrator | 2025-09-17 16:34:38.016204 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-17 16:34:38.016208 | orchestrator | Wednesday 17 September 2025 16:34:28 +0000 (0:00:00.069) 0:00:06.156 *** 2025-09-17 16:34:38.016212 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016215 | orchestrator | 2025-09-17 16:34:38.016219 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-17 16:34:38.016223 | orchestrator | Wednesday 17 September 2025 16:34:29 +0000 (0:00:00.252) 0:00:06.409 *** 2025-09-17 16:34:38.016227 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016231 | orchestrator | 2025-09-17 16:34:38.016243 | orchestrator | TASK [Prepare quorum test vars] 
************************************************ 2025-09-17 16:34:38.016247 | orchestrator | Wednesday 17 September 2025 16:34:29 +0000 (0:00:00.231) 0:00:06.640 *** 2025-09-17 16:34:38.016251 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.016255 | orchestrator | 2025-09-17 16:34:38.016259 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-09-17 16:34:38.016262 | orchestrator | Wednesday 17 September 2025 16:34:29 +0000 (0:00:00.109) 0:00:06.749 *** 2025-09-17 16:34:38.016266 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:34:38.016270 | orchestrator | 2025-09-17 16:34:38.016274 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-09-17 16:34:38.016277 | orchestrator | Wednesday 17 September 2025 16:34:30 +0000 (0:00:01.628) 0:00:08.378 *** 2025-09-17 16:34:38.016281 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.016285 | orchestrator | 2025-09-17 16:34:38.016289 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-09-17 16:34:38.016292 | orchestrator | Wednesday 17 September 2025 16:34:31 +0000 (0:00:00.290) 0:00:08.668 *** 2025-09-17 16:34:38.016296 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016300 | orchestrator | 2025-09-17 16:34:38.016306 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-09-17 16:34:38.016310 | orchestrator | Wednesday 17 September 2025 16:34:31 +0000 (0:00:00.118) 0:00:08.787 *** 2025-09-17 16:34:38.016314 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.016318 | orchestrator | 2025-09-17 16:34:38.016321 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-09-17 16:34:38.016325 | orchestrator | Wednesday 17 September 2025 16:34:31 +0000 (0:00:00.300) 0:00:09.087 *** 2025-09-17 16:34:38.016329 | orchestrator | ok: [testbed-node-0] 
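The quorum test that just passed reduces to one comparison: every monitor in the monmap must also appear in the quorum set. A rough sketch of that check over a pretty-printed `ceph quorum_status` dump like the one shown earlier; it parses with awk/grep purely for illustration (real tooling would use `jq`, and this kind of text matching is fragile against format changes):

```shell
#!/usr/bin/env bash
# Sketch: pass only if the number of ranks in the "quorum" array equals
# the number of monitors ("rank" entries) in the monmap dump file $1.
set -eu

quorum_size() {
  # Count the entries of the pretty-printed "quorum": [ ... ] array.
  awk '/"quorum": \[/ {inq=1; next} inq && /\]/ {exit} inq {n++} END {print n+0}' "$1"
}

mon_count() {
  # Each monitor in the monmap carries exactly one "rank" field.
  grep -c '"rank":' "$1"
}

check_quorum() {
  [ "$(quorum_size "$1")" -eq "$(mon_count "$1")" ]
}
```

On the dump above this yields 3 of 3 monitors in quorum, which is why the "Pass quorum test if all monitors are in quorum" task reports ok.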
2025-09-17 16:34:38.016333 | orchestrator | 2025-09-17 16:34:38.016337 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-09-17 16:34:38.016342 | orchestrator | Wednesday 17 September 2025 16:34:32 +0000 (0:00:00.651) 0:00:09.738 *** 2025-09-17 16:34:38.016349 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016353 | orchestrator | 2025-09-17 16:34:38.016358 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-09-17 16:34:38.016362 | orchestrator | Wednesday 17 September 2025 16:34:32 +0000 (0:00:00.110) 0:00:09.849 *** 2025-09-17 16:34:38.016366 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.016371 | orchestrator | 2025-09-17 16:34:38.016375 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-09-17 16:34:38.016379 | orchestrator | Wednesday 17 September 2025 16:34:32 +0000 (0:00:00.123) 0:00:09.972 *** 2025-09-17 16:34:38.016383 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.016388 | orchestrator | 2025-09-17 16:34:38.016392 | orchestrator | TASK [Gather status data] ****************************************************** 2025-09-17 16:34:38.016396 | orchestrator | Wednesday 17 September 2025 16:34:32 +0000 (0:00:00.109) 0:00:10.081 *** 2025-09-17 16:34:38.016400 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:34:38.016404 | orchestrator | 2025-09-17 16:34:38.016408 | orchestrator | TASK [Set health test data] **************************************************** 2025-09-17 16:34:38.016413 | orchestrator | Wednesday 17 September 2025 16:34:33 +0000 (0:00:01.258) 0:00:11.339 *** 2025-09-17 16:34:38.016417 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.016421 | orchestrator | 2025-09-17 16:34:38.016425 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-09-17 16:34:38.016430 | orchestrator | Wednesday 17 
September 2025 16:34:34 +0000 (0:00:00.285) 0:00:11.624 *** 2025-09-17 16:34:38.016434 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016438 | orchestrator | 2025-09-17 16:34:38.016442 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-09-17 16:34:38.016447 | orchestrator | Wednesday 17 September 2025 16:34:34 +0000 (0:00:00.137) 0:00:11.762 *** 2025-09-17 16:34:38.016451 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:34:38.016455 | orchestrator | 2025-09-17 16:34:38.016460 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-09-17 16:34:38.016464 | orchestrator | Wednesday 17 September 2025 16:34:34 +0000 (0:00:00.138) 0:00:11.901 *** 2025-09-17 16:34:38.016468 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016472 | orchestrator | 2025-09-17 16:34:38.016477 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-09-17 16:34:38.016481 | orchestrator | Wednesday 17 September 2025 16:34:34 +0000 (0:00:00.124) 0:00:12.025 *** 2025-09-17 16:34:38.016485 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016490 | orchestrator | 2025-09-17 16:34:38.016494 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-17 16:34:38.016498 | orchestrator | Wednesday 17 September 2025 16:34:34 +0000 (0:00:00.131) 0:00:12.157 *** 2025-09-17 16:34:38.016502 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-17 16:34:38.016507 | orchestrator | 2025-09-17 16:34:38.016511 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-17 16:34:38.016515 | orchestrator | Wednesday 17 September 2025 16:34:35 +0000 (0:00:00.445) 0:00:12.603 *** 2025-09-17 16:34:38.016519 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:34:38.016523 | orchestrator | 2025-09-17 
16:34:38.016528 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-17 16:34:38.016532 | orchestrator | Wednesday 17 September 2025 16:34:35 +0000 (0:00:00.598) 0:00:13.202 ***
2025-09-17 16:34:38.016536 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-17 16:34:38.016540 | orchestrator |
2025-09-17 16:34:38.016545 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-17 16:34:38.016549 | orchestrator | Wednesday 17 September 2025 16:34:37 +0000 (0:00:01.493) 0:00:14.696 ***
2025-09-17 16:34:38.016553 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-17 16:34:38.016558 | orchestrator |
2025-09-17 16:34:38.016562 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-17 16:34:38.016566 | orchestrator | Wednesday 17 September 2025 16:34:37 +0000 (0:00:00.248) 0:00:14.945 ***
2025-09-17 16:34:38.016573 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-17 16:34:38.016577 | orchestrator |
2025-09-17 16:34:38.016584 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:34:39.980036 | orchestrator | Wednesday 17 September 2025 16:34:37 +0000 (0:00:00.249) 0:00:15.195 ***
2025-09-17 16:34:39.980131 | orchestrator |
2025-09-17 16:34:39.980145 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:34:39.980156 | orchestrator | Wednesday 17 September 2025 16:34:37 +0000 (0:00:00.066) 0:00:15.261 ***
2025-09-17 16:34:39.980166 | orchestrator |
2025-09-17 16:34:39.980177 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:34:39.980187 | orchestrator | Wednesday 17 September 2025 16:34:37 +0000 (0:00:00.066) 0:00:15.327 ***
2025-09-17 16:34:39.980197 | orchestrator |
2025-09-17 16:34:39.980207 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-09-17 16:34:39.980217 | orchestrator | Wednesday 17 September 2025 16:34:38 +0000 (0:00:00.070) 0:00:15.397 ***
2025-09-17 16:34:39.980227 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-17 16:34:39.980238 | orchestrator |
2025-09-17 16:34:39.980248 | orchestrator | TASK [Print report file information] *******************************************
2025-09-17 16:34:39.980258 | orchestrator | Wednesday 17 September 2025 16:34:39 +0000 (0:00:01.250) 0:00:16.648 ***
2025-09-17 16:34:39.980268 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-09-17 16:34:39.980278 | orchestrator |  "msg": [
2025-09-17 16:34:39.980289 | orchestrator |  "Validator run completed.",
2025-09-17 16:34:39.980300 | orchestrator |  "You can find the report file here:",
2025-09-17 16:34:39.980311 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-09-17T16:34:23+00:00-report.json",
2025-09-17 16:34:39.980321 | orchestrator |  "on the following host:",
2025-09-17 16:34:39.980332 | orchestrator |  "testbed-manager"
2025-09-17 16:34:39.980341 | orchestrator |  ]
2025-09-17 16:34:39.980352 | orchestrator | }
2025-09-17 16:34:39.980362 | orchestrator |
2025-09-17 16:34:39.980372 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:34:39.980384 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-17 16:34:39.980395 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 16:34:39.980406 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 16:34:39.980416 | orchestrator |
2025-09-17 16:34:39.980426 | orchestrator |
2025-09-17 16:34:39.980440 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:34:39.980450 | orchestrator | Wednesday 17 September 2025 16:34:39 +0000 (0:00:00.378) 0:00:17.027 ***
2025-09-17 16:34:39.980460 | orchestrator | ===============================================================================
2025-09-17 16:34:39.980470 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.63s
2025-09-17 16:34:39.980480 | orchestrator | Aggregate test results step one ----------------------------------------- 1.49s
2025-09-17 16:34:39.980490 | orchestrator | Gather status data ------------------------------------------------------ 1.26s
2025-09-17 16:34:39.980500 | orchestrator | Write report file ------------------------------------------------------- 1.25s
2025-09-17 16:34:39.980510 | orchestrator | Get container info ------------------------------------------------------ 1.03s
2025-09-17 16:34:39.980520 | orchestrator | Create report output directory ------------------------------------------ 0.85s
2025-09-17 16:34:39.980530 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.65s
2025-09-17 16:34:39.980540 | orchestrator | Get timestamp for report file ------------------------------------------- 0.60s
2025-09-17 16:34:39.980575 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.60s
2025-09-17 16:34:39.980587 | orchestrator | Aggregate test results step two ----------------------------------------- 0.58s
2025-09-17 16:34:39.980599 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.45s
2025-09-17 16:34:39.980610 | orchestrator | Set test result to passed if container is existing ---------------------- 0.44s
2025-09-17 16:34:39.980621 | orchestrator | Print report file information ------------------------------------------- 0.38s
2025-09-17 16:34:39.980632 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.30s
2025-09-17 16:34:39.980644 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.30s
2025-09-17 16:34:39.980655 | orchestrator | Set quorum test data ---------------------------------------------------- 0.29s
2025-09-17 16:34:39.980666 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s
2025-09-17 16:34:39.980677 | orchestrator | Set health test data ---------------------------------------------------- 0.29s
2025-09-17 16:34:39.980688 | orchestrator | Set test result to failed if container is missing ----------------------- 0.27s
2025-09-17 16:34:39.980699 | orchestrator | Prepare test data for container existance test -------------------------- 0.26s
2025-09-17 16:34:40.252257 | orchestrator | + osism validate ceph-mgrs
2025-09-17 16:35:08.906185 | orchestrator |
2025-09-17 16:35:08.906253 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-09-17 16:35:08.906264 | orchestrator |
2025-09-17 16:35:08.906271 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-09-17 16:35:08.906278 | orchestrator | Wednesday 17 September 2025 16:34:56 +0000 (0:00:00.390) 0:00:00.390 ***
2025-09-17 16:35:08.906284 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:08.906291 | orchestrator |
2025-09-17 16:35:08.906297 | orchestrator | TASK [Create report output directory] ******************************************
2025-09-17 16:35:08.906303 | orchestrator | Wednesday 17 September 2025 16:34:56 +0000 (0:00:00.543) 0:00:00.934 ***
2025-09-17 16:35:08.906310 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:08.906316 | orchestrator |
2025-09-17 16:35:08.906322 | orchestrator | TASK [Define report vars] ******************************************************
2025-09-17 16:35:08.906328 | orchestrator | Wednesday 17 September 2025 16:34:57 +0000 (0:00:00.681) 0:00:01.615 ***
2025-09-17 16:35:08.906335 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:35:08.906342 | orchestrator |
2025-09-17 16:35:08.906348 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-09-17 16:35:08.906354 | orchestrator | Wednesday 17 September 2025 16:34:57 +0000 (0:00:00.182) 0:00:01.797 ***
2025-09-17 16:35:08.906361 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:35:08.906367 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:35:08.906373 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:35:08.906380 | orchestrator |
2025-09-17 16:35:08.906386 | orchestrator | TASK [Get container info] ******************************************************
2025-09-17 16:35:08.906392 | orchestrator | Wednesday 17 September 2025 16:34:57 +0000 (0:00:00.234) 0:00:02.032 ***
2025-09-17 16:35:08.906399 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:35:08.906405 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:35:08.906422 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:35:08.906429 | orchestrator |
2025-09-17 16:35:08.906436 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-09-17 16:35:08.906442 | orchestrator | Wednesday 17 September 2025 16:34:58 +0000 (0:00:00.881) 0:00:02.914 ***
2025-09-17 16:35:08.906449 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:35:08.906455 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:35:08.906461 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:35:08.906467 | orchestrator |
2025-09-17 16:35:08.906474 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-09-17 16:35:08.906480 | orchestrator | Wednesday 17 September 2025 16:34:58 +0000 (0:00:00.240) 0:00:03.154 ***
2025-09-17 16:35:08.906497 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:35:08.906504 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:35:08.906510 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:35:08.906516 | orchestrator |
2025-09-17 16:35:08.906523 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-17 16:35:08.906529 | orchestrator | Wednesday 17 September 2025 16:34:59 +0000 (0:00:00.372) 0:00:03.527 ***
2025-09-17 16:35:08.906535 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:35:08.906542 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:35:08.906548 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:35:08.906554 | orchestrator |
2025-09-17 16:35:08.906560 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-09-17 16:35:08.906566 | orchestrator | Wednesday 17 September 2025 16:34:59 +0000 (0:00:00.265) 0:00:03.792 ***
2025-09-17 16:35:08.906573 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:35:08.906579 | orchestrator | skipping: [testbed-node-1]
2025-09-17 16:35:08.906585 | orchestrator | skipping: [testbed-node-2]
2025-09-17 16:35:08.906592 | orchestrator |
2025-09-17 16:35:08.906598 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-09-17 16:35:08.906604 | orchestrator | Wednesday 17 September 2025 16:34:59 +0000 (0:00:00.251) 0:00:04.044 ***
2025-09-17 16:35:08.906610 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:35:08.906616 | orchestrator | ok: [testbed-node-1]
2025-09-17 16:35:08.906623 | orchestrator | ok: [testbed-node-2]
2025-09-17 16:35:08.906629 | orchestrator |
2025-09-17 16:35:08.906635 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-17 16:35:08.906642 | orchestrator | Wednesday 17 September 2025 16:35:00 +0000 (0:00:00.270) 0:00:04.315 ***
2025-09-17 16:35:08.906648 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:35:08.906654 | orchestrator |
2025-09-17 16:35:08.906660 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-17 16:35:08.906666 | orchestrator | Wednesday 17 September 2025 16:35:00 +0000 (0:00:00.217) 0:00:04.533 ***
2025-09-17 16:35:08.906673 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:35:08.906679 | orchestrator |
2025-09-17 16:35:08.906685 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-17 16:35:08.906692 | orchestrator | Wednesday 17 September 2025 16:35:00 +0000 (0:00:00.456) 0:00:04.989 ***
2025-09-17 16:35:08.906698 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:35:08.906704 | orchestrator |
2025-09-17 16:35:08.906710 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:35:08.906717 | orchestrator | Wednesday 17 September 2025 16:35:00 +0000 (0:00:00.062) 0:00:05.209 ***
2025-09-17 16:35:08.906723 | orchestrator |
2025-09-17 16:35:08.906729 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:35:08.906736 | orchestrator | Wednesday 17 September 2025 16:35:00 +0000 (0:00:00.062) 0:00:05.271 ***
2025-09-17 16:35:08.906742 | orchestrator |
2025-09-17 16:35:08.906748 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:35:08.906754 | orchestrator | Wednesday 17 September 2025 16:35:01 +0000 (0:00:00.062) 0:00:05.333 ***
2025-09-17 16:35:08.906760 | orchestrator |
2025-09-17 16:35:08.906768 | orchestrator | TASK [Print report file information] *******************************************
2025-09-17 16:35:08.906775 | orchestrator | Wednesday 17 September 2025 16:35:01 +0000 (0:00:00.064) 0:00:05.398 ***
2025-09-17 16:35:08.906782 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:35:08.906789 | orchestrator |
2025-09-17 16:35:08.906797 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-09-17 16:35:08.906804 | orchestrator | Wednesday 17 September 2025 16:35:01 +0000 (0:00:00.227) 0:00:05.625 ***
2025-09-17 16:35:08.906812 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:35:08.906819 | orchestrator |
2025-09-17 16:35:08.906837 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-09-17 16:35:08.906845 | orchestrator | Wednesday 17 September 2025 16:35:01 +0000 (0:00:00.214) 0:00:05.840 ***
2025-09-17 16:35:08.906857 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:35:08.906864 | orchestrator |
2025-09-17 16:35:08.906871 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-09-17 16:35:08.906878 | orchestrator | Wednesday 17 September 2025 16:35:01 +0000 (0:00:00.089) 0:00:05.929 ***
2025-09-17 16:35:08.906886 | orchestrator | changed: [testbed-node-0]
2025-09-17 16:35:08.906893 | orchestrator |
2025-09-17 16:35:08.906900 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-09-17 16:35:08.906907 | orchestrator | Wednesday 17 September 2025 16:35:03 +0000 (0:00:01.970) 0:00:07.900 ***
2025-09-17 16:35:08.906914 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:35:08.906921 | orchestrator |
2025-09-17 16:35:08.906928 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-09-17 16:35:08.906935 | orchestrator | Wednesday 17 September 2025 16:35:03 +0000 (0:00:00.231) 0:00:08.131 ***
2025-09-17 16:35:08.906943 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:35:08.906950 | orchestrator |
2025-09-17 16:35:08.906957 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-09-17 16:35:08.906964 | orchestrator | Wednesday 17 September 2025 16:35:04 +0000 (0:00:00.294) 0:00:08.421 ***
2025-09-17 16:35:08.906987 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:35:08.906994 | orchestrator |
2025-09-17 16:35:08.907002 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-09-17 16:35:08.907009 | orchestrator | Wednesday 17 September 2025 16:35:04 +0000 (0:00:00.155) 0:00:08.716 ***
2025-09-17 16:35:08.907016 | orchestrator | ok: [testbed-node-0]
2025-09-17 16:35:08.907024 | orchestrator |
2025-09-17 16:35:08.907030 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-09-17 16:35:08.907041 | orchestrator | Wednesday 17 September 2025 16:35:04 +0000 (0:00:00.250) 0:00:08.872 ***
2025-09-17 16:35:08.907048 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:08.907055 | orchestrator |
2025-09-17 16:35:08.907063 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-09-17 16:35:08.907070 | orchestrator | Wednesday 17 September 2025 16:35:04 +0000 (0:00:00.259) 0:00:09.122 ***
2025-09-17 16:35:08.907077 | orchestrator | skipping: [testbed-node-0]
2025-09-17 16:35:08.907085 | orchestrator |
2025-09-17 16:35:08.907092 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-17 16:35:08.907099 | orchestrator | Wednesday 17 September 2025 16:35:05 +0000 (0:00:00.259) 0:00:09.382 ***
2025-09-17 16:35:08.907107 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:08.907114 | orchestrator |
2025-09-17 16:35:08.907121 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-17 16:35:08.907128 | orchestrator | Wednesday 17 September 2025 16:35:06 +0000 (0:00:01.156) 0:00:10.538 ***
2025-09-17 16:35:08.907134 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:08.907140 | orchestrator |
2025-09-17 16:35:08.907146 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-17 16:35:08.907152 | orchestrator | Wednesday 17 September 2025 16:35:06 +0000 (0:00:00.248) 0:00:10.787 ***
2025-09-17 16:35:08.907159 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:08.907165 | orchestrator |
2025-09-17 16:35:08.907171 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:35:08.907178 | orchestrator | Wednesday 17 September 2025 16:35:06 +0000 (0:00:00.065) 0:00:11.038 ***
2025-09-17 16:35:08.907184 | orchestrator |
2025-09-17 16:35:08.907190 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:35:08.907196 | orchestrator | Wednesday 17 September 2025 16:35:06 +0000 (0:00:00.064) 0:00:11.103 ***
2025-09-17 16:35:08.907202 | orchestrator |
2025-09-17 16:35:08.907209 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:35:08.907215 | orchestrator | Wednesday 17 September 2025 16:35:06 +0000 (0:00:00.064) 0:00:11.168 ***
2025-09-17 16:35:08.907225 | orchestrator |
2025-09-17 16:35:08.907231 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-09-17 16:35:08.907238 | orchestrator | Wednesday 17 September 2025 16:35:06 +0000 (0:00:00.067) 0:00:11.236 ***
2025-09-17 16:35:08.907244 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:08.907250 | orchestrator |
2025-09-17 16:35:08.907256 | orchestrator | TASK [Print report file information] *******************************************
2025-09-17 16:35:08.907263 | orchestrator | Wednesday 17 September 2025 16:35:08 +0000 (0:00:01.352) 0:00:12.589 ***
2025-09-17 16:35:08.907269 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-09-17 16:35:08.907275 | orchestrator |  "msg": [
2025-09-17 16:35:08.907281 | orchestrator |  "Validator run completed.",
2025-09-17 16:35:08.907288 | orchestrator |  "You can find the report file here:",
2025-09-17 16:35:08.907294 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-09-17T16:34:56+00:00-report.json",
2025-09-17 16:35:08.907301 | orchestrator |  "on the following host:",
2025-09-17 16:35:08.907307 | orchestrator |  "testbed-manager"
2025-09-17 16:35:08.907313 | orchestrator |  ]
2025-09-17 16:35:08.907320 | orchestrator | }
2025-09-17 16:35:08.907326 | orchestrator |
2025-09-17 16:35:08.907332 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:35:08.907339 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-17 16:35:08.907346 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 16:35:08.907357 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-17 16:35:09.069866 | orchestrator |
2025-09-17 16:35:09.069950 | orchestrator |
2025-09-17 16:35:09.069965 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:35:09.070008 | orchestrator | Wednesday 17 September 2025 16:35:08 +0000 (0:00:00.594) 0:00:13.183 ***
2025-09-17 16:35:09.070059 | orchestrator | ===============================================================================
2025-09-17 16:35:09.070071 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.97s
2025-09-17 16:35:09.070082 | orchestrator | Write report file ------------------------------------------------------- 1.35s
2025-09-17 16:35:09.070093 | orchestrator | Aggregate test results step one ----------------------------------------- 1.16s
2025-09-17 16:35:09.070104 | orchestrator | Get container info ------------------------------------------------------ 0.88s
2025-09-17 16:35:09.070115 | orchestrator | Create report output directory ------------------------------------------ 0.68s
2025-09-17 16:35:09.070126 | orchestrator | Print report file information ------------------------------------------- 0.59s
2025-09-17 16:35:09.070136 | orchestrator | Get timestamp for report file ------------------------------------------- 0.54s
2025-09-17 16:35:09.070147 | orchestrator | Aggregate test results step two ----------------------------------------- 0.46s
2025-09-17 16:35:09.070158 | orchestrator | Set test result to passed if container is existing ---------------------- 0.37s
2025-09-17 16:35:09.070169 | orchestrator | Fail test if mgr modules are disabled that should be enabled ------------ 0.29s
2025-09-17 16:35:09.070180 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.29s
2025-09-17 16:35:09.070191 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.27s
2025-09-17 16:35:09.070202 | orchestrator | Prepare test data ------------------------------------------------------- 0.27s
2025-09-17 16:35:09.070213 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s
2025-09-17 16:35:09.070224 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.25s
2025-09-17 16:35:09.070235 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s
2025-09-17 16:35:09.070267 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.25s
2025-09-17 16:35:09.070279 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s
2025-09-17 16:35:09.070290 | orchestrator | Set test result to failed if container is missing ----------------------- 0.24s
2025-09-17 16:35:09.070301 | orchestrator | Prepare test data for container existance test -------------------------- 0.23s
2025-09-17 16:35:09.243387 | orchestrator | + osism validate ceph-osds
2025-09-17 16:35:29.128134 | orchestrator |
2025-09-17 16:35:29.128263 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-09-17 16:35:29.128280 | orchestrator |
2025-09-17 16:35:29.128293 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-09-17 16:35:29.128305 | orchestrator | Wednesday 17 September 2025 16:35:25 +0000 (0:00:00.419) 0:00:00.419 ***
2025-09-17 16:35:29.128317 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:29.128329 | orchestrator |
2025-09-17 16:35:29.128356 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-17 16:35:29.129138 | orchestrator | Wednesday 17 September 2025 16:35:25 +0000 (0:00:00.677) 0:00:01.096 ***
2025-09-17 16:35:29.129163 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:29.129174 | orchestrator |
2025-09-17 16:35:29.129185 | orchestrator | TASK [Create report output directory] ******************************************
2025-09-17 16:35:29.129197 | orchestrator | Wednesday 17 September 2025 16:35:26 +0000 (0:00:00.241) 0:00:01.337 ***
2025-09-17 16:35:29.129208 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:29.129219 | orchestrator |
2025-09-17 16:35:29.129229 | orchestrator | TASK [Define report vars] ******************************************************
2025-09-17 16:35:29.129240 | orchestrator | Wednesday 17 September 2025 16:35:26 +0000 (0:00:00.913) 0:00:02.251 ***
2025-09-17 16:35:29.129251 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:29.129263 | orchestrator |
2025-09-17 16:35:29.129274 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-09-17 16:35:29.129284 | orchestrator | Wednesday 17 September 2025 16:35:27 +0000 (0:00:00.110) 0:00:02.362 ***
2025-09-17 16:35:29.129295 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:29.129306 | orchestrator |
2025-09-17 16:35:29.129317 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-09-17 16:35:29.129328 | orchestrator | Wednesday 17 September 2025 16:35:27 +0000 (0:00:00.128) 0:00:02.491 ***
2025-09-17 16:35:29.129338 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:29.129349 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:35:29.129360 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:35:29.129370 | orchestrator |
2025-09-17 16:35:29.129381 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-09-17 16:35:29.129392 | orchestrator | Wednesday 17 September 2025 16:35:27 +0000 (0:00:00.294) 0:00:02.785 ***
2025-09-17 16:35:29.129403 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:29.129414 | orchestrator |
2025-09-17 16:35:29.129424 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-09-17 16:35:29.129435 | orchestrator | Wednesday 17 September 2025 16:35:27 +0000 (0:00:00.161) 0:00:02.946 ***
2025-09-17 16:35:29.129445 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:29.129456 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:29.129467 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:29.129479 | orchestrator |
2025-09-17 16:35:29.129490 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-09-17 16:35:29.129501 | orchestrator | Wednesday 17 September 2025 16:35:27 +0000 (0:00:00.300) 0:00:03.246 ***
2025-09-17 16:35:29.129512 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:29.129522 | orchestrator |
2025-09-17 16:35:29.129533 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-17 16:35:29.129544 | orchestrator | Wednesday 17 September 2025 16:35:28 +0000 (0:00:00.490) 0:00:03.737 ***
2025-09-17 16:35:29.129554 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:29.129592 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:29.129603 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:29.129621 | orchestrator |
2025-09-17 16:35:29.129642 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-09-17 16:35:29.129660 | orchestrator | Wednesday 17 September 2025 16:35:28 +0000 (0:00:00.431) 0:00:04.168 ***
2025-09-17 16:35:29.129702 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'be7a426eb89dc635a2a691a5dc06e2beea6e677c93407d6bbfbe0c0d9d005999', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-17 16:35:29.129727 | orchestrator | skipping: [testbed-node-3] => (item={'id': '96fba78910aafaad95b0aeef49c623861aef605225c644eb67686d2a3e474b9f', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-17 16:35:29.129749 | orchestrator | skipping: [testbed-node-3] => (item={'id': '13e7104a42e7e94352bfa3b999dff0808b65e0ea1433cd19b2de8d2e7d5afb0d', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-09-17 16:35:29.129776 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f04c21d99217b08c07f5d0f5b971e35a4677f71a5a1f366c63586a2d09d1c83b', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-17 16:35:29.129789 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b5c933feadc451f00c8ae13511abf777ef124d6d74dc01e255675a8838846e41', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-09-17 16:35:29.129821 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bdbbf749a6476e04d1121d7f208a1f9c209a37367a81d5ac9c289e9b2df29844', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2025-09-17 16:35:29.129844 | orchestrator | skipping: [testbed-node-3] => (item={'id': '753ed24106f44f88c178d33fb7f28081947a9182013d012fdd33b8de5da847a7', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-17 16:35:29.129855 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a3b4e1d99b135f396b7b6ded3cd0f757a6316a7fae787d5e90ec1e84b7cf430d', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-09-17 16:35:29.129868 | orchestrator | skipping: [testbed-node-3] => (item={'id': '85a3bd1c639fbd6f32579387e7bffa6b9248c95d2de33845b7a7321be473d8f4', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-17 16:35:29.129879 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ee7f3ba71c2a2fb6ddd2e8b3b97b6fe22d1d3c0ead48f0be4f28a68b68f937f6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-09-17 16:35:29.129891 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2906ce9eea720ea820d6dde66b11691bbe6f2b5f6eae97e61c921ace034d646f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})
2025-09-17 16:35:29.129902 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7e5960f6363afcfe46e9ea992b94f682b679f391181ed91025cd55e5e035cb09', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-17 16:35:29.129924 | orchestrator | ok: [testbed-node-3] => (item={'id': '5204e5b0cf0f6fc9bcb8fd90491f53f332912ae5cfc658f98c9cd1610cb15ccf', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-17 16:35:29.129937 | orchestrator | ok: [testbed-node-3] => (item={'id': '5af72550896ff8a70dee7633f6207b1b4f1e2cf6d8e3bff1ea4526abd34cd2a3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-17 16:35:29.129948 | orchestrator | skipping: [testbed-node-3] => (item={'id': '65739c349c1af1705af4cad287ddd1f7d98d6bc6d1046bf0d050d11294c2cf69', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
2025-09-17 16:35:29.129959 | orchestrator | skipping: [testbed-node-3] => (item={'id': '291b8b40ba1d4ce99fde5d21d35a31287950c248055151ef732adf246ad56cc2', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-09-17 16:35:29.129971 | orchestrator | skipping: [testbed-node-3] => (item={'id': '98317fc7815873a355830cf4e7a459bfd47d7105753cfe9d57ca06a2396fd494', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-09-17 16:35:29.130011 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b40ae3d813be860bd04b783dd0ee910d4f053f3458bff0fffe0ad9a512e27c7d', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-17 16:35:29.130091 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6414b727a4b396f21e9aa8c1984aa4cc82f223a1cc8e5ec3248dbf9113c2a86f', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-17 16:35:29.130111 | orchestrator | skipping: [testbed-node-3] => (item={'id': '77d8302bdfcddbe742f0d8d76430f3503bddd9c3b4d5ce4f26d85b5b4774ffc7', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})
2025-09-17 16:35:29.130143 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ff8b15a76f795c38254bc06b57e83fe84fe7ebecd761e992f32ef61d8c7307f4', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-17 16:35:29.429053 | orchestrator | skipping: [testbed-node-4] => (item={'id': '18fc212dbb04ad64e7b78eb5fa49e74c4be62a8679a46baf9d145ea3a7a13db9', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-17 16:35:29.429151 | orchestrator | skipping: [testbed-node-4] => (item={'id': '87be09a47823e260248eb73dd82f6c77bdacb9e4ca1863ecf6cf1a5f5f658467', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-09-17 16:35:29.429167 | orchestrator | skipping: [testbed-node-4] => (item={'id': '10cb976b059cf64738b0843dde3bab03b921ce8cefd9b5dea57af42486a523f5', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-17 16:35:29.429180 | orchestrator | skipping: [testbed-node-4] => (item={'id': '56510fed79867c04e2818d862b15e1c83cd7ca6636d156c987a25fe89a15ba40', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-09-17 16:35:29.429191 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'be03103dcb4678585c366e58557c17219dfe4b295903907626e873b0b831b9fd', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2025-09-17 16:35:29.429228 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bc856598db06819e5e17ab46e84ab7e47f5da6d3df7f8c5a721f64b05bde361a', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-17 16:35:29.429240 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6dc28ae6bfb515dea54d94fc5e651203f4cdfcf21c72d3dcbb2b61bdefdb770c', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-09-17 16:35:29.429253 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ed12fe810b03ad4532dcbc959e31016fe4d1b90dbe18a506fc8934db54ce5340', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-17 16:35:29.429264 | orchestrator | skipping: [testbed-node-4] => (item={'id': '97e706574c8e62f5f6595ca492c9648fbbdac31c5de1ee8dc9bf35ecb14169b2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-09-17 16:35:29.429277 | orchestrator | skipping: [testbed-node-4] => (item={'id': '379b2b094047c674cb834f1bf3d9ee65ca997473b067fc1262c7430d098dd058', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})
2025-09-17 16:35:29.429289 | orchestrator | skipping: [testbed-node-4] => (item={'id': '287f2ebedeab9a65b8965c1ec3527167fed9f9e9ebe2d05013373a542b71e385', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-17 16:35:29.429302 | orchestrator | ok: [testbed-node-4] => (item={'id': '6e7ef77f34773f24f8452fa57d7890c1d038acad4914223de9ff0a40e9366dcb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-17 16:35:29.429316 | orchestrator | ok: [testbed-node-4] => (item={'id': '74f5a1971eb4adc891d169ea32fdf1c04f793e57acea9ab0bb32b4e123cb0986', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-17 16:35:29.429335 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fb12e532e1a0b21059a79738f7720aef56e17d30d871a4577d6a088008f7f595', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
2025-09-17 16:35:29.429398 | orchestrator | skipping: [testbed-node-4] => (item={'id': '034d3d6c2b3579a405a4a2891e136ee30146b68976418c95c51cf166dca43b6b', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-09-17 16:35:29.429420 | orchestrator | skipping: [testbed-node-4] => (item={'id': '718da09f2a77ce2ddea9e1977d1970906535db8a544c9ec11f347b23c6962f6f', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-09-17 16:35:29.429432 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8692aaa2b1ad9a9f1ef071233a09baac90703cb4c74b804e6d1100bfc9709dd5', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-17 16:35:29.429443 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cc92156a7813c9ade46aee13847248480a93f0f9fb9afc5f1bb984065b16869b', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-17 16:35:29.429464 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3f2954b572409f3431c13e99a618df179f535053954564114701792411d276f0', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})
2025-09-17 16:35:29.429475 | orchestrator | skipping: [testbed-node-5] => (item={'id': '172af00443a82f834a67e4cb7430e8c0215661f68061b1ab343e5105dcb21dd5', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-17 16:35:29.429487 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c183ebba1315ecf2eded448e30d6cc86584c5663045aa15af98d10e98e738c2a', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-17 16:35:29.429498 | orchestrator | skipping: [testbed-node-5] => (item={'id': '08db36951ce23d1ead016f8c92dcf4b065bf2cbc8d50d76f21ed32469ca48b0e', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-09-17 16:35:29.429509 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'afb0d61d6eef90d16b09e64c8442b82a2fbb91b842d87e502cb6a8bf8953338a', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-17 16:35:29.429520 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e99820012f75b25f972ef3370c2727ef862adb2ba7da4083d20330140a672cb9', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-09-17 16:35:29.429531 | orchestrator | skipping: [testbed-node-5] => (item={'id': '22e86cbb6dfa138d2b7cb7c173589c5f20373a185d938312cd365951d03ad541', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2025-09-17 16:35:29.429542 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1448301e2545628795ae016a28ab392edb822f9640e01d2defefd1d46b91e782', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-17 16:35:29.429572 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bceeb6f837d98d979058f4710896b896fa7224b981bbc4127915d3748d796651', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-09-17 16:35:29.429585 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e4998181b369b165ba5f69dc8767d3201dbe2c756547b884d0a7c7056a3092f3', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-17 16:35:29.429598 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6c805390410a09bdd8ed669990cbf1c0711a257bbf6c10da1e30f0f57f59f29b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-09-17 16:35:29.429619 | orchestrator | skipping: [testbed-node-5] => (item={'id': '52e1399d8ed7f9cdcd705db3398e997c5fd885f8b496b2d9bfa83f81b35ebe3a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})
2025-09-17 16:35:36.234576 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cf402a330fca50a31c4b545f0e094c512c2f75fb0fbdb34c514dbc099608f60d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-17 16:35:36.234686 | orchestrator | ok: [testbed-node-5] => (item={'id': '68940244cf362e110d04cdaf56ea34d57c00f62dac85b0080edf87b8e32b59f9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-17 16:35:36.234728 | orchestrator | ok: [testbed-node-5] => (item={'id': 'f3664824deb2961fc21648bd06541d5a43f72affa153f7a9d45d9ffc3198a287', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-17 16:35:36.234741 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f8e54120198d563c741cf37ea14a19acbfb5c09b67e36b4219a918b14a3e9601', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
2025-09-17 16:35:36.234754 | orchestrator | skipping: [testbed-node-5] => (item={'id': '64bc6f7351806283fc0d9742c11db1ee257a03f217b24c11dcb3850d55c642e6', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state':
'running', 'status': 'Up 28 minutes (healthy)'})
2025-09-17 16:35:36.234768 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c1a5dd87241f5e49b510cf5566a35ac028b55d037900265a9bb2e6dbb17300c9', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-09-17 16:35:36.234779 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1e8dc67a70b68a3854921f0e3fef22bc1b4ce28546745cf061f78343a6a57562', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-17 16:35:36.234790 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f69360438178df7124701e7ed0b2c5f0d2aec545d036080a4d96e0e281734484', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-17 16:35:36.234801 | orchestrator | skipping: [testbed-node-5] => (item={'id': '23f273876bf4162ea96343db69d403deb9884a00f3f0424e9d782063802368cd', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})
2025-09-17 16:35:36.234812 | orchestrator |
2025-09-17 16:35:36.234825 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-09-17 16:35:36.234837 | orchestrator | Wednesday 17 September 2025 16:35:29 +0000 (0:00:00.517) 0:00:04.685 ***
2025-09-17 16:35:36.234848 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:36.234860 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:36.234871 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:36.234882 | orchestrator |
2025-09-17 16:35:36.234893 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-09-17 16:35:36.234904 | orchestrator | Wednesday 17 September 2025 16:35:29 +0000 (0:00:00.292) 0:00:04.978 ***
2025-09-17 16:35:36.234915 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:36.234926 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:35:36.234938 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:35:36.234949 | orchestrator |
2025-09-17 16:35:36.234960 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-09-17 16:35:36.234971 | orchestrator | Wednesday 17 September 2025 16:35:29 +0000 (0:00:00.261) 0:00:05.239 ***
2025-09-17 16:35:36.235040 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:36.235067 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:36.235079 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:36.235090 | orchestrator |
2025-09-17 16:35:36.235101 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-17 16:35:36.235112 | orchestrator | Wednesday 17 September 2025 16:35:30 +0000 (0:00:00.475) 0:00:05.715 ***
2025-09-17 16:35:36.235123 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:36.235134 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:36.235145 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:36.235163 | orchestrator |
2025-09-17 16:35:36.235174 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-09-17 16:35:36.235186 | orchestrator | Wednesday 17 September 2025 16:35:30 +0000 (0:00:00.277) 0:00:05.992 ***
2025-09-17 16:35:36.235197 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-09-17 16:35:36.235209 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-09-17 16:35:36.235220 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:36.235231 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-09-17 16:35:36.235242 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-09-17 16:35:36.235271 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:35:36.235283 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-09-17 16:35:36.235294 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-09-17 16:35:36.235305 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:35:36.235316 | orchestrator |
2025-09-17 16:35:36.235326 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-09-17 16:35:36.235337 | orchestrator | Wednesday 17 September 2025 16:35:30 +0000 (0:00:00.277) 0:00:06.270 ***
2025-09-17 16:35:36.235348 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:36.235359 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:36.235370 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:36.235381 | orchestrator |
2025-09-17 16:35:36.235391 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-09-17 16:35:36.235402 | orchestrator | Wednesday 17 September 2025 16:35:31 +0000 (0:00:00.287) 0:00:06.558 ***
2025-09-17 16:35:36.235413 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:36.235424 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:35:36.235435 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:35:36.235446 | orchestrator |
2025-09-17 16:35:36.235457 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-09-17 16:35:36.235468 | orchestrator | Wednesday 17 September 2025 16:35:31 +0000 (0:00:00.425) 0:00:06.983 ***
2025-09-17 16:35:36.235479 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:36.235489 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:35:36.235500 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:35:36.235511 | orchestrator |
2025-09-17 16:35:36.235522 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-09-17 16:35:36.235532 | orchestrator | Wednesday 17 September 2025 16:35:32 +0000 (0:00:00.315) 0:00:07.298 ***
2025-09-17 16:35:36.235543 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:36.235554 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:36.235565 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:36.235575 | orchestrator |
2025-09-17 16:35:36.235586 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-17 16:35:36.235597 | orchestrator | Wednesday 17 September 2025 16:35:32 +0000 (0:00:00.268) 0:00:07.566 ***
2025-09-17 16:35:36.235608 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:36.235619 | orchestrator |
2025-09-17 16:35:36.235630 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-17 16:35:36.235640 | orchestrator | Wednesday 17 September 2025 16:35:32 +0000 (0:00:00.229) 0:00:07.796 ***
2025-09-17 16:35:36.235651 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:36.235662 | orchestrator |
2025-09-17 16:35:36.235673 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-17 16:35:36.235683 | orchestrator | Wednesday 17 September 2025 16:35:32 +0000 (0:00:00.231) 0:00:08.028 ***
2025-09-17 16:35:36.235694 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:36.235705 | orchestrator |
2025-09-17 16:35:36.235716 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:35:36.235733 | orchestrator | Wednesday 17 September 2025 16:35:32 +0000 (0:00:00.228) 0:00:08.256 ***
2025-09-17 16:35:36.235744 | orchestrator |
2025-09-17 16:35:36.235754 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:35:36.235765 | orchestrator | Wednesday 17 September 2025 16:35:33 +0000 (0:00:00.063) 0:00:08.319 ***
2025-09-17 16:35:36.235776 | orchestrator |
2025-09-17 16:35:36.235786 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:35:36.235797 | orchestrator | Wednesday 17 September 2025 16:35:33 +0000 (0:00:00.059) 0:00:08.379 ***
2025-09-17 16:35:36.235808 | orchestrator |
2025-09-17 16:35:36.235819 | orchestrator | TASK [Print report file information] *******************************************
2025-09-17 16:35:36.235830 | orchestrator | Wednesday 17 September 2025 16:35:33 +0000 (0:00:00.213) 0:00:08.593 ***
2025-09-17 16:35:36.235841 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:36.235852 | orchestrator |
2025-09-17 16:35:36.235862 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-09-17 16:35:36.235874 | orchestrator | Wednesday 17 September 2025 16:35:33 +0000 (0:00:00.239) 0:00:08.832 ***
2025-09-17 16:35:36.235884 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:36.235895 | orchestrator |
2025-09-17 16:35:36.235906 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-17 16:35:36.235917 | orchestrator | Wednesday 17 September 2025 16:35:33 +0000 (0:00:00.234) 0:00:09.067 ***
2025-09-17 16:35:36.235928 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:36.235939 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:36.235950 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:36.235960 | orchestrator |
2025-09-17 16:35:36.236003 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-09-17 16:35:36.236016 | orchestrator | Wednesday 17 September 2025 16:35:34 +0000 (0:00:00.269) 0:00:09.337 ***
2025-09-17 16:35:36.236027 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:36.236038 | orchestrator |
2025-09-17 16:35:36.236049 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-09-17 16:35:36.236060 | orchestrator | Wednesday 17 September 2025 16:35:34 +0000 (0:00:00.210) 0:00:09.547 ***
2025-09-17 16:35:36.236071 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-17 16:35:36.236081 | orchestrator |
2025-09-17 16:35:36.236092 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-09-17 16:35:36.236103 | orchestrator | Wednesday 17 September 2025 16:35:35 +0000 (0:00:01.455) 0:00:11.003 ***
2025-09-17 16:35:36.236114 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:36.236125 | orchestrator |
2025-09-17 16:35:36.236135 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-09-17 16:35:36.236146 | orchestrator | Wednesday 17 September 2025 16:35:35 +0000 (0:00:00.270) 0:00:11.128 ***
2025-09-17 16:35:36.236156 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:36.236167 | orchestrator |
2025-09-17 16:35:36.236178 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-09-17 16:35:36.236189 | orchestrator | Wednesday 17 September 2025 16:35:36 +0000 (0:00:00.104) 0:00:11.399 ***
2025-09-17 16:35:36.236205 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:48.079766 | orchestrator |
2025-09-17 16:35:48.079878 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-09-17 16:35:48.079895 | orchestrator | Wednesday 17 September 2025 16:35:36 +0000 (0:00:00.109) 0:00:11.504 ***
2025-09-17 16:35:48.079907 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:48.079920 | orchestrator |
2025-09-17 16:35:48.079931 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-17
16:35:48.079942 | orchestrator | Wednesday 17 September 2025 16:35:36 +0000 (0:00:00.109) 0:00:11.613 ***
2025-09-17 16:35:48.079954 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:48.079965 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:48.079976 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:48.080038 | orchestrator |
2025-09-17 16:35:48.080050 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-09-17 16:35:48.080087 | orchestrator | Wednesday 17 September 2025 16:35:36 +0000 (0:00:00.427) 0:00:12.041 ***
2025-09-17 16:35:48.080098 | orchestrator | changed: [testbed-node-3]
2025-09-17 16:35:48.080110 | orchestrator | changed: [testbed-node-4]
2025-09-17 16:35:48.080121 | orchestrator | changed: [testbed-node-5]
2025-09-17 16:35:48.080132 | orchestrator |
2025-09-17 16:35:48.080143 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-09-17 16:35:48.080154 | orchestrator | Wednesday 17 September 2025 16:35:39 +0000 (0:00:02.267) 0:00:14.309 ***
2025-09-17 16:35:48.080165 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:48.080176 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:48.080187 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:48.080197 | orchestrator |
2025-09-17 16:35:48.080208 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-09-17 16:35:48.080219 | orchestrator | Wednesday 17 September 2025 16:35:39 +0000 (0:00:00.294) 0:00:14.603 ***
2025-09-17 16:35:48.080229 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:48.080240 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:48.080251 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:48.080261 | orchestrator |
2025-09-17 16:35:48.080272 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-09-17 16:35:48.080283 | orchestrator | Wednesday 17 September 2025 16:35:39 +0000 (0:00:00.457) 0:00:15.060 ***
2025-09-17 16:35:48.080294 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:48.080306 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:35:48.080318 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:35:48.080331 | orchestrator |
2025-09-17 16:35:48.080343 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-09-17 16:35:48.080355 | orchestrator | Wednesday 17 September 2025 16:35:40 +0000 (0:00:00.441) 0:00:15.502 ***
2025-09-17 16:35:48.080367 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:48.080379 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:48.080391 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:48.080403 | orchestrator |
2025-09-17 16:35:48.080415 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-09-17 16:35:48.080427 | orchestrator | Wednesday 17 September 2025 16:35:40 +0000 (0:00:00.311) 0:00:15.814 ***
2025-09-17 16:35:48.080439 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:48.080451 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:35:48.080463 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:35:48.080475 | orchestrator |
2025-09-17 16:35:48.080487 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-09-17 16:35:48.080499 | orchestrator | Wednesday 17 September 2025 16:35:40 +0000 (0:00:00.256) 0:00:16.071 ***
2025-09-17 16:35:48.080511 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:48.080523 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:35:48.080535 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:35:48.080547 | orchestrator |
2025-09-17 16:35:48.080559 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-17 16:35:48.080571 | orchestrator | Wednesday 17 September 2025 16:35:41 +0000 (0:00:00.261) 0:00:16.332 ***
2025-09-17 16:35:48.080583 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:48.080595 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:48.080607 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:48.080619 | orchestrator |
2025-09-17 16:35:48.080632 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-09-17 16:35:48.080644 | orchestrator | Wednesday 17 September 2025 16:35:41 +0000 (0:00:00.648) 0:00:16.980 ***
2025-09-17 16:35:48.080656 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:48.080667 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:48.080678 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:48.080689 | orchestrator |
2025-09-17 16:35:48.080700 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-09-17 16:35:48.080711 | orchestrator | Wednesday 17 September 2025 16:35:42 +0000 (0:00:00.462) 0:00:17.443 ***
2025-09-17 16:35:48.080730 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:48.080741 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:48.080752 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:48.080762 | orchestrator |
2025-09-17 16:35:48.080773 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-09-17 16:35:48.080784 | orchestrator | Wednesday 17 September 2025 16:35:42 +0000 (0:00:00.276) 0:00:17.719 ***
2025-09-17 16:35:48.080794 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:48.080805 | orchestrator | skipping: [testbed-node-4]
2025-09-17 16:35:48.080816 | orchestrator | skipping: [testbed-node-5]
2025-09-17 16:35:48.080827 | orchestrator |
2025-09-17 16:35:48.080838 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-09-17 16:35:48.080849 | orchestrator | Wednesday 17 September 2025 16:35:42 +0000 (0:00:00.279) 0:00:17.998 ***
2025-09-17 16:35:48.080859 | orchestrator | ok: [testbed-node-3]
2025-09-17 16:35:48.080870 | orchestrator | ok: [testbed-node-4]
2025-09-17 16:35:48.080881 | orchestrator | ok: [testbed-node-5]
2025-09-17 16:35:48.080891 | orchestrator |
2025-09-17 16:35:48.080902 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-09-17 16:35:48.080913 | orchestrator | Wednesday 17 September 2025 16:35:43 +0000 (0:00:00.479) 0:00:18.478 ***
2025-09-17 16:35:48.080924 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:48.080935 | orchestrator |
2025-09-17 16:35:48.080946 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-09-17 16:35:48.080957 | orchestrator | Wednesday 17 September 2025 16:35:43 +0000 (0:00:00.238) 0:00:18.717 ***
2025-09-17 16:35:48.080968 | orchestrator | skipping: [testbed-node-3]
2025-09-17 16:35:48.080978 | orchestrator |
2025-09-17 16:35:48.081027 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-17 16:35:48.081039 | orchestrator | Wednesday 17 September 2025 16:35:43 +0000 (0:00:00.235) 0:00:18.952 ***
2025-09-17 16:35:48.081050 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:48.081061 | orchestrator |
2025-09-17 16:35:48.081072 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-17 16:35:48.081082 | orchestrator | Wednesday 17 September 2025 16:35:45 +0000 (0:00:01.493) 0:00:20.445 ***
2025-09-17 16:35:48.081093 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:48.081104 | orchestrator |
2025-09-17 16:35:48.081115 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-17 16:35:48.081125 | orchestrator | Wednesday 17 September 2025 16:35:45 +0000 (0:00:00.238) 0:00:20.684 ***
2025-09-17 16:35:48.081136 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:48.081147 | orchestrator |
2025-09-17 16:35:48.081158 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:35:48.081168 | orchestrator | Wednesday 17 September 2025 16:35:45 +0000 (0:00:00.262) 0:00:20.947 ***
2025-09-17 16:35:48.081179 | orchestrator |
2025-09-17 16:35:48.081190 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:35:48.081200 | orchestrator | Wednesday 17 September 2025 16:35:45 +0000 (0:00:00.063) 0:00:21.011 ***
2025-09-17 16:35:48.081211 | orchestrator |
2025-09-17 16:35:48.081222 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-17 16:35:48.081232 | orchestrator | Wednesday 17 September 2025 16:35:45 +0000 (0:00:00.063) 0:00:21.074 ***
2025-09-17 16:35:48.081243 | orchestrator |
2025-09-17 16:35:48.081254 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-09-17 16:35:48.081265 | orchestrator | Wednesday 17 September 2025 16:35:45 +0000 (0:00:00.067) 0:00:21.141 ***
2025-09-17 16:35:48.081321 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-17 16:35:48.081333 | orchestrator |
2025-09-17 16:35:48.081344 | orchestrator | TASK [Print report file information] *******************************************
2025-09-17 16:35:48.081355 | orchestrator | Wednesday 17 September 2025 16:35:47 +0000 (0:00:01.450) 0:00:22.592 ***
2025-09-17 16:35:48.081373 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-09-17 16:35:48.081385 | orchestrator |  "msg": [
2025-09-17 16:35:48.081395 | orchestrator |  "Validator run completed.",
2025-09-17 16:35:48.081406 | orchestrator |  "You can find the report file here:",
2025-09-17 16:35:48.081417 | orchestrator |
"/opt/reports/validator/ceph-osds-validator-2025-09-17T16:35:25+00:00-report.json",
2025-09-17 16:35:48.081428 | orchestrator |  "on the following host:",
2025-09-17 16:35:48.081439 | orchestrator |  "testbed-manager"
2025-09-17 16:35:48.081450 | orchestrator |  ]
2025-09-17 16:35:48.081461 | orchestrator | }
2025-09-17 16:35:48.081472 | orchestrator |
2025-09-17 16:35:48.081483 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:35:48.081494 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-09-17 16:35:48.081506 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-17 16:35:48.081517 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-17 16:35:48.081527 | orchestrator |
2025-09-17 16:35:48.081538 | orchestrator |
2025-09-17 16:35:48.081549 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:35:48.081559 | orchestrator | Wednesday 17 September 2025 16:35:48 +0000 (0:00:00.741) 0:00:23.334 ***
2025-09-17 16:35:48.081570 | orchestrator | ===============================================================================
2025-09-17 16:35:48.081581 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.27s
2025-09-17 16:35:48.081591 | orchestrator | Aggregate test results step one ----------------------------------------- 1.49s
2025-09-17 16:35:48.081602 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.46s
2025-09-17 16:35:48.081613 | orchestrator | Write report file ------------------------------------------------------- 1.45s
2025-09-17 16:35:48.081628 | orchestrator | Create report output directory ------------------------------------------ 0.91s
2025-09-17 16:35:48.081639 | orchestrator | Print report file information ------------------------------------------- 0.74s
2025-09-17 16:35:48.081649 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s
2025-09-17 16:35:48.081660 | orchestrator | Prepare test data ------------------------------------------------------- 0.65s
2025-09-17 16:35:48.081670 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.52s
2025-09-17 16:35:48.081681 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.49s
2025-09-17 16:35:48.081691 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.48s
2025-09-17 16:35:48.081702 | orchestrator | Set test result to passed if count matches ------------------------------ 0.48s
2025-09-17 16:35:48.081713 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.46s
2025-09-17 16:35:48.081724 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.46s
2025-09-17 16:35:48.081735 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.44s
2025-09-17 16:35:48.081745 | orchestrator | Prepare test data ------------------------------------------------------- 0.43s
2025-09-17 16:35:48.081762 | orchestrator | Prepare test data ------------------------------------------------------- 0.43s
2025-09-17 16:35:48.306529 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.43s
2025-09-17 16:35:48.306595 | orchestrator | Flush handlers ---------------------------------------------------------- 0.34s
2025-09-17 16:35:48.306606 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.32s
2025-09-17 16:35:48.569038 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-09-17 16:35:48.579760 | orchestrator | + set -e
2025-09-17 16:35:48.579803 | orchestrator | + source /opt/manager-vars.sh
2025-09-17 16:35:48.579815 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-17 16:35:48.579827 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-17 16:35:48.579838 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-17 16:35:48.579849 | orchestrator | ++ CEPH_VERSION=reef
2025-09-17 16:35:48.579860 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-17 16:35:48.579873 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-17 16:35:48.579884 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-17 16:35:48.579895 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-17 16:35:48.579906 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-17 16:35:48.579918 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-17 16:35:48.579929 | orchestrator | ++ export ARA=false
2025-09-17 16:35:48.579940 | orchestrator | ++ ARA=false
2025-09-17 16:35:48.579952 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-17 16:35:48.579963 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-17 16:35:48.579974 | orchestrator | ++ export TEMPEST=false
2025-09-17 16:35:48.580025 | orchestrator | ++ TEMPEST=false
2025-09-17 16:35:48.580037 | orchestrator | ++ export IS_ZUUL=true
2025-09-17 16:35:48.580047 | orchestrator | ++ IS_ZUUL=true
2025-09-17 16:35:48.580059 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.205
2025-09-17 16:35:48.580070 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.205
2025-09-17 16:35:48.580080 | orchestrator | ++ export EXTERNAL_API=false
2025-09-17 16:35:48.580091 | orchestrator | ++ EXTERNAL_API=false
2025-09-17 16:35:48.580102 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-17 16:35:48.580113 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-17 16:35:48.580123 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-17 16:35:48.580134 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-17 16:35:48.580145 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-17 16:35:48.580156 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-17 16:35:48.580166 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-17 16:35:48.580177 | orchestrator | + source /etc/os-release 2025-09-17 16:35:48.580188 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-09-17 16:35:48.580199 | orchestrator | ++ NAME=Ubuntu 2025-09-17 16:35:48.580209 | orchestrator | ++ VERSION_ID=24.04 2025-09-17 16:35:48.580220 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-09-17 16:35:48.580231 | orchestrator | ++ VERSION_CODENAME=noble 2025-09-17 16:35:48.580242 | orchestrator | ++ ID=ubuntu 2025-09-17 16:35:48.580253 | orchestrator | ++ ID_LIKE=debian 2025-09-17 16:35:48.580263 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-09-17 16:35:48.580274 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-09-17 16:35:48.580285 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-09-17 16:35:48.580297 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-09-17 16:35:48.580309 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-09-17 16:35:48.580320 | orchestrator | ++ LOGO=ubuntu-logo 2025-09-17 16:35:48.580330 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-09-17 16:35:48.580342 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-09-17 16:35:48.580354 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-17 16:35:48.593203 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-17 16:36:07.847054 | orchestrator | 2025-09-17 16:36:07.847135 | orchestrator | # Status of Elasticsearch 2025-09-17 16:36:07.847143 | orchestrator | 2025-09-17 16:36:07.847150 | orchestrator | + pushd /opt/configuration/contrib 2025-09-17 
16:36:07.847157 | orchestrator | + echo 2025-09-17 16:36:07.847163 | orchestrator | + echo '# Status of Elasticsearch' 2025-09-17 16:36:07.847169 | orchestrator | + echo 2025-09-17 16:36:07.847175 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-09-17 16:36:08.019580 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-09-17 16:36:08.019912 | orchestrator | 2025-09-17 16:36:08.019941 | orchestrator | # Status of MariaDB 2025-09-17 16:36:08.019954 | orchestrator | 2025-09-17 16:36:08.019966 | orchestrator | + echo 2025-09-17 16:36:08.019977 | orchestrator | + echo '# Status of MariaDB' 2025-09-17 16:36:08.020073 | orchestrator | + echo 2025-09-17 16:36:08.020088 | orchestrator | + MARIADB_USER=root_shard_0 2025-09-17 16:36:08.020100 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-09-17 16:36:08.061109 | orchestrator | Reading package lists... 2025-09-17 16:36:08.354295 | orchestrator | Building dependency tree... 2025-09-17 16:36:08.355175 | orchestrator | Reading state information... 2025-09-17 16:36:08.717126 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-09-17 16:36:08.717223 | orchestrator | bc set to manually installed. 2025-09-17 16:36:08.717238 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
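The trace above shows `200-infrastructure.sh` probing for its monitoring dependencies with `dpkg -s` and only then running `apt-get install`; since the script runs under `set -e` and both commands appear in the trace, the probe evidently failed inside an `||` guard. A minimal sketch of that idempotent-install pattern, with the package names taken from the trace (the `ensure_packages` helper and the `DPKG`/`APT_GET` indirection are hypothetical, added here so the logic can be exercised without root):

```shell
#!/usr/bin/env bash
set -e

# Overridable commands (hypothetical indirection, not in the real script):
# lets the guard logic be dry-run on a machine without sudo or apt.
DPKG="${DPKG:-dpkg}"
APT_GET="${APT_GET:-sudo apt-get}"

# Install the listed packages only when the dpkg probe reports one missing,
# mirroring the dpkg -s / apt-get install sequence in the trace above.
ensure_packages() {
    $DPKG -s "$@" >/dev/null 2>&1 || $APT_GET install -y "$@"
}
```

Called as `ensure_packages libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client`, this is a no-op on a host where everything is already present, which keeps repeated check runs fast.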
2025-09-17 16:36:09.337357 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-09-17 16:36:09.338507 | orchestrator | 2025-09-17 16:36:09.338538 | orchestrator | # Status of Prometheus 2025-09-17 16:36:09.338552 | orchestrator | 2025-09-17 16:36:09.338564 | orchestrator | + echo 2025-09-17 16:36:09.338575 | orchestrator | + echo '# Status of Prometheus' 2025-09-17 16:36:09.338586 | orchestrator | + echo 2025-09-17 16:36:09.338617 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-09-17 16:36:09.386666 | orchestrator | Unauthorized 2025-09-17 16:36:09.389119 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-09-17 16:36:09.435905 | orchestrator | Unauthorized 2025-09-17 16:36:09.438183 | orchestrator | 2025-09-17 16:36:09.438211 | orchestrator | # Status of RabbitMQ 2025-09-17 16:36:09.438224 | orchestrator | 2025-09-17 16:36:09.438235 | orchestrator | + echo 2025-09-17 16:36:09.438247 | orchestrator | + echo '# Status of RabbitMQ' 2025-09-17 16:36:09.438258 | orchestrator | + echo 2025-09-17 16:36:09.438270 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-09-17 16:36:09.869629 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-09-17 16:36:09.877701 | orchestrator | 2025-09-17 16:36:09.877740 | orchestrator | # Status of Redis 2025-09-17 16:36:09.877754 | orchestrator | 2025-09-17 16:36:09.877766 | orchestrator | + echo 2025-09-17 16:36:09.877778 | orchestrator | + echo '# Status of Redis' 2025-09-17 16:36:09.877790 | orchestrator | + echo 2025-09-17 16:36:09.877804 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-09-17 16:36:09.885051 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001766s;;;0.000000;10.000000 2025-09-17 16:36:09.885670 | orchestrator | + popd 2025-09-17 16:36:09.885866 | orchestrator | 2025-09-17 16:36:09.885883 | orchestrator | + echo 2025-09-17 16:36:09.885894 | orchestrator | # Create backup of MariaDB database 2025-09-17 16:36:09.885906 | orchestrator | 2025-09-17 16:36:09.885918 | orchestrator | + echo '# Create backup of MariaDB database' 2025-09-17 16:36:09.885929 | orchestrator | + echo 2025-09-17 16:36:09.885940 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-09-17 16:36:11.712112 | orchestrator | 2025-09-17 16:36:11 | INFO  | Task 9baf07c4-2d72-4e22-bc16-931b41c77b60 (mariadb_backup) was prepared for execution. 2025-09-17 16:36:11.712187 | orchestrator | 2025-09-17 16:36:11 | INFO  | It takes a moment until task 9baf07c4-2d72-4e22-bc16-931b41c77b60 (mariadb_backup) has been started and output is visible here. 2025-09-17 16:38:49.837159 | orchestrator | 2025-09-17 16:38:49.837344 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-17 16:38:49.837365 | orchestrator | 2025-09-17 16:38:49.837377 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-17 16:38:49.837389 | orchestrator | Wednesday 17 September 2025 16:36:15 +0000 (0:00:00.173) 0:00:00.173 *** 2025-09-17 16:38:49.837400 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:38:49.837412 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:38:49.837422 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:38:49.837433 | orchestrator | 2025-09-17 16:38:49.837445 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-17 16:38:49.837455 | orchestrator | Wednesday 17 September 2025 16:36:15 +0000 (0:00:00.317) 0:00:00.490 *** 2025-09-17 16:38:49.837466 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-09-17 16:38:49.837478 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-17 16:38:49.837511 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-17 16:38:49.837523 | orchestrator | 2025-09-17 16:38:49.837534 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-17 16:38:49.837544 | orchestrator | 2025-09-17 16:38:49.837556 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-17 16:38:49.837567 | orchestrator | Wednesday 17 September 2025 16:36:16 +0000 (0:00:00.526) 0:00:01.017 *** 2025-09-17 16:38:49.837577 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-17 16:38:49.837588 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-17 16:38:49.837599 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-17 16:38:49.837609 | orchestrator | 2025-09-17 16:38:49.837620 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-17 16:38:49.837631 | orchestrator | Wednesday 17 September 2025 16:36:16 +0000 (0:00:00.371) 0:00:01.389 *** 2025-09-17 16:38:49.837642 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-17 16:38:49.837653 | orchestrator | 2025-09-17 16:38:49.837664 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-09-17 16:38:49.837675 | orchestrator | Wednesday 17 September 2025 16:36:17 +0000 (0:00:00.498) 0:00:01.887 *** 2025-09-17 16:38:49.837685 | orchestrator | ok: [testbed-node-0] 2025-09-17 16:38:49.837696 | orchestrator | ok: [testbed-node-1] 2025-09-17 16:38:49.837706 | orchestrator | ok: [testbed-node-2] 2025-09-17 16:38:49.837717 | orchestrator | 2025-09-17 16:38:49.837728 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-09-17 16:38:49.837739 | orchestrator | Wednesday 17 September 2025 16:36:19 +0000 (0:00:02.717) 0:00:04.605 *** 2025-09-17 16:38:49.837749 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:38:49.837761 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:38:49.837771 | orchestrator | 2025-09-17 16:38:49.837782 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2025-09-17 16:38:49.837792 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-17 16:38:49.837803 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-09-17 16:38:49.837814 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-17 16:38:49.837825 | orchestrator | mariadb_bootstrap_restart 2025-09-17 16:38:49.837835 | orchestrator | changed: [testbed-node-0] 2025-09-17 16:38:49.837846 | orchestrator | 2025-09-17 16:38:49.837857 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-17 16:38:49.837867 | orchestrator | skipping: no hosts matched 2025-09-17 16:38:49.837878 | orchestrator | 2025-09-17 16:38:49.837889 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-17 16:38:49.837900 | orchestrator | skipping: no hosts matched 2025-09-17 16:38:49.837911 | orchestrator | 2025-09-17 16:38:49.837922 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-17 16:38:49.837933 | orchestrator | skipping: no hosts matched 2025-09-17 16:38:49.837944 | orchestrator | 2025-09-17 16:38:49.837955 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-17 16:38:49.837965 | orchestrator | 2025-09-17 16:38:49.837976 | orchestrator | TASK [Include mariadb post-deploy.yml] 
***************************************** 2025-09-17 16:38:49.838002 | orchestrator | Wednesday 17 September 2025 16:38:48 +0000 (0:02:29.137) 0:02:33.743 *** 2025-09-17 16:38:49.838013 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:38:49.838092 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:38:49.838103 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:38:49.838114 | orchestrator | 2025-09-17 16:38:49.838124 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-17 16:38:49.838135 | orchestrator | Wednesday 17 September 2025 16:38:49 +0000 (0:00:00.294) 0:02:34.037 *** 2025-09-17 16:38:49.838146 | orchestrator | skipping: [testbed-node-0] 2025-09-17 16:38:49.838165 | orchestrator | skipping: [testbed-node-1] 2025-09-17 16:38:49.838176 | orchestrator | skipping: [testbed-node-2] 2025-09-17 16:38:49.838186 | orchestrator | 2025-09-17 16:38:49.838197 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-17 16:38:49.838209 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-17 16:38:49.838244 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-17 16:38:49.838256 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-17 16:38:49.838267 | orchestrator | 2025-09-17 16:38:49.838277 | orchestrator | 2025-09-17 16:38:49.838288 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-17 16:38:49.838299 | orchestrator | Wednesday 17 September 2025 16:38:49 +0000 (0:00:00.211) 0:02:34.248 *** 2025-09-17 16:38:49.838310 | orchestrator | =============================================================================== 2025-09-17 16:38:49.838345 | orchestrator | mariadb : Taking full database backup via Mariabackup 
----------------- 149.14s 2025-09-17 16:38:49.838366 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.72s 2025-09-17 16:38:49.838386 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2025-09-17 16:38:49.838405 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.50s 2025-09-17 16:38:49.838424 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.37s 2025-09-17 16:38:49.838444 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-09-17 16:38:49.838466 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.29s 2025-09-17 16:38:49.838486 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.21s 2025-09-17 16:38:50.077483 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-09-17 16:38:50.083350 | orchestrator | + set -e 2025-09-17 16:38:50.083379 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-17 16:38:50.083390 | orchestrator | ++ export INTERACTIVE=false 2025-09-17 16:38:50.083402 | orchestrator | ++ INTERACTIVE=false 2025-09-17 16:38:50.083413 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-17 16:38:50.083423 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-17 16:38:50.083993 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-17 16:38:50.084014 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-17 16:38:50.087182 | orchestrator | 2025-09-17 16:38:50.087204 | orchestrator | # OpenStack endpoints 2025-09-17 16:38:50.087215 | orchestrator | 2025-09-17 16:38:50.087253 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-17 16:38:50.087264 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-17 16:38:50.087275 | 
orchestrator | + export OS_CLOUD=admin 2025-09-17 16:38:50.087286 | orchestrator | + OS_CLOUD=admin 2025-09-17 16:38:50.087297 | orchestrator | + echo 2025-09-17 16:38:50.087308 | orchestrator | + echo '# OpenStack endpoints' 2025-09-17 16:38:50.087319 | orchestrator | + echo 2025-09-17 16:38:50.087330 | orchestrator | + openstack endpoint list 2025-09-17 16:38:53.385205 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-17 16:38:53.385384 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-09-17 16:38:53.385401 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-17 16:38:53.385413 | orchestrator | | 0579e09b5f2f491b8c67eb456b0427d2 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-17 16:38:53.385425 | orchestrator | | 18082c3fb87e4d3795f3648629dceabd | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-09-17 16:38:53.385459 | orchestrator | | 1ca55341a92c4b28aeabba21cbc5c780 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-09-17 16:38:53.385471 | orchestrator | | 1ddef23c975440f994b963514f242746 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-17 16:38:53.385482 | orchestrator | | 2f82c879490d461687d9198637773dd7 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-09-17 16:38:53.385493 | orchestrator | | 3bad4678b6004b14b847c05e79f2aca9 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-09-17 16:38:53.385504 
| orchestrator | | 44d1d00cf855426e8e72ca2367c45753 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-09-17 16:38:53.385515 | orchestrator | | 472a7f46bf2b4b1eaf822a2ec3e5375f | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-09-17 16:38:53.385526 | orchestrator | | 583ba5b14b5344709f7a0d8cc7bd0e03 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-09-17 16:38:53.385537 | orchestrator | | 58c005e57ca54279bc6c26a9499363dd | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-09-17 16:38:53.385548 | orchestrator | | 59abcd7fad934211a342f9b5bc7c009f | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-09-17 16:38:53.385559 | orchestrator | | 64872dd6c3274f55a8bb34f7e12c585c | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-09-17 16:38:53.385570 | orchestrator | | 6544672efdfd41c09d438274de9a09cd | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-09-17 16:38:53.385581 | orchestrator | | 6c4ad0a148004ec9bd04a1f590795d78 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-09-17 16:38:53.385592 | orchestrator | | 8216ac3c63dc4d0588e0290108aa57e4 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-09-17 16:38:53.385603 | orchestrator | | 85a2e19da6e94c2089c966a11a013dfe | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-09-17 16:38:53.385614 | orchestrator | | 89b655a41ee547269d40416827bfefbd | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-09-17 16:38:53.385624 | orchestrator | | 89bfa880a4c94a39a18a970a70536d15 | RegionOne | cinderv3 | volumev3 | True 
| internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-17 16:38:53.385635 | orchestrator | | 90df0a53df7e43f6b9f71d34b77968db | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-17 16:38:53.385665 | orchestrator | | bdcd54a132554c6bbcac1388c4e2cfc6 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-09-17 16:38:53.385694 | orchestrator | | c06112bcc3d44ce4b01601b34a9c648d | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-09-17 16:38:53.385706 | orchestrator | | dceeccd4d6154af0b288cbbaa178a12a | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-09-17 16:38:53.385724 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-17 16:38:53.607396 | orchestrator | 2025-09-17 16:38:53.607476 | orchestrator | # Cinder 2025-09-17 16:38:53.607491 | orchestrator | 2025-09-17 16:38:53.607503 | orchestrator | + echo 2025-09-17 16:38:53.607515 | orchestrator | + echo '# Cinder' 2025-09-17 16:38:53.607527 | orchestrator | + echo 2025-09-17 16:38:53.607538 | orchestrator | + openstack volume service list 2025-09-17 16:38:56.167277 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-17 16:38:56.167387 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-09-17 16:38:56.167402 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-17 16:38:56.167414 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-17T16:38:51.000000 | 2025-09-17 16:38:56.167425 | orchestrator | | cinder-scheduler | testbed-node-1 | internal 
| enabled | up | 2025-09-17T16:38:50.000000 | 2025-09-17 16:38:56.167436 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-17T16:38:51.000000 | 2025-09-17 16:38:56.167447 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-09-17T16:38:50.000000 | 2025-09-17 16:38:56.167457 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-09-17T16:38:50.000000 | 2025-09-17 16:38:56.167468 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-09-17T16:38:51.000000 | 2025-09-17 16:38:56.167496 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-09-17T16:38:55.000000 | 2025-09-17 16:38:56.167537 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-09-17T16:38:55.000000 | 2025-09-17 16:38:56.167550 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-09-17T16:38:46.000000 | 2025-09-17 16:38:56.167561 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-17 16:38:56.402331 | orchestrator | 2025-09-17 16:38:56.402450 | orchestrator | # Neutron 2025-09-17 16:38:56.402466 | orchestrator | 2025-09-17 16:38:56.402478 | orchestrator | + echo 2025-09-17 16:38:56.402490 | orchestrator | + echo '# Neutron' 2025-09-17 16:38:56.402502 | orchestrator | + echo 2025-09-17 16:38:56.402513 | orchestrator | + openstack network agent list 2025-09-17 16:38:59.495962 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-17 16:38:59.496060 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-09-17 16:38:59.496074 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-17 16:38:59.496086 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-09-17 16:38:59.496097 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-09-17 16:38:59.496108 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-09-17 16:38:59.496119 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-09-17 16:38:59.496130 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-09-17 16:38:59.496171 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-09-17 16:38:59.496183 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-17 16:38:59.496194 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-17 16:38:59.496205 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-17 16:38:59.496216 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-17 16:38:59.718690 | orchestrator | + openstack network service provider list 2025-09-17 16:39:02.407400 | orchestrator | +---------------+------+---------+ 2025-09-17 16:39:02.407548 | orchestrator | | Service Type | Name | Default | 2025-09-17 16:39:02.407564 | orchestrator | 
+---------------+------+---------+ 2025-09-17 16:39:02.407576 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-09-17 16:39:02.407587 | orchestrator | +---------------+------+---------+ 2025-09-17 16:39:02.627703 | orchestrator | 2025-09-17 16:39:02.627801 | orchestrator | # Nova 2025-09-17 16:39:02.627815 | orchestrator | 2025-09-17 16:39:02.627827 | orchestrator | + echo 2025-09-17 16:39:02.627838 | orchestrator | + echo '# Nova' 2025-09-17 16:39:02.627850 | orchestrator | + echo 2025-09-17 16:39:02.627861 | orchestrator | + openstack compute service list 2025-09-17 16:39:05.241652 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-17 16:39:05.241777 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-09-17 16:39:05.241792 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-17 16:39:05.241804 | orchestrator | | b2722200-5da8-4194-963d-99880d53b726 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-17T16:39:02.000000 | 2025-09-17 16:39:05.241815 | orchestrator | | 9f711b3b-bf4e-4122-bc69-0eb50a1d3084 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-17T16:38:58.000000 | 2025-09-17 16:39:05.241826 | orchestrator | | 0efeea33-19b8-4e00-97a1-e8ddcc8e009b | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-17T16:38:58.000000 | 2025-09-17 16:39:05.241837 | orchestrator | | 448396b4-2bda-4dc7-9e24-d51e5f3ec731 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-09-17T16:38:57.000000 | 2025-09-17 16:39:05.241848 | orchestrator | | cf654ab5-0d1c-45b2-bbbe-b1d9455a14ac | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-09-17T16:38:59.000000 | 2025-09-17 16:39:05.241859 | orchestrator | | 9c2d25f9-4b3b-4296-80b2-3c5e869155c0 | nova-conductor | 
testbed-node-1 | internal | enabled | up | 2025-09-17T16:38:59.000000 | 2025-09-17 16:39:05.241870 | orchestrator | | d14f2d86-ee05-474e-9d59-5ea154aa4f85 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-09-17T16:38:56.000000 | 2025-09-17 16:39:05.241881 | orchestrator | | 0ccf9dd8-d911-4bec-b063-ce4374c90589 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-09-17T16:38:56.000000 | 2025-09-17 16:39:05.241913 | orchestrator | | 9dcda144-871c-44f7-ad9c-76d744f6fecc | nova-compute | testbed-node-4 | nova | enabled | up | 2025-09-17T16:38:56.000000 | 2025-09-17 16:39:05.241924 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-17 16:39:05.467285 | orchestrator | + openstack hypervisor list 2025-09-17 16:39:09.764223 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-17 16:39:09.764403 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-09-17 16:39:09.764419 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-17 16:39:09.764463 | orchestrator | | eba87240-81b2-4682-aae6-93d57fc8cdd9 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-09-17 16:39:09.764475 | orchestrator | | 8f0a9cfd-db3d-4748-98a3-46e7f11a5d7d | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-09-17 16:39:09.764486 | orchestrator | | 51a1c2e6-440d-487f-aa74-ed0f29b836fc | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-09-17 16:39:09.764496 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-17 16:39:09.986848 | orchestrator | 2025-09-17 16:39:09.986964 | orchestrator | # Run OpenStack test play 2025-09-17 16:39:09.986979 | orchestrator | 2025-09-17 16:39:09.986991 | orchestrator | + echo 2025-09-17 
16:39:09.987003 | orchestrator | + echo '# Run OpenStack test play'
2025-09-17 16:39:09.987016 | orchestrator | + echo
2025-09-17 16:39:09.987027 | orchestrator | + osism apply --environment openstack test
2025-09-17 16:39:11.757505 | orchestrator | 2025-09-17 16:39:11 | INFO  | Trying to run play test in environment openstack
2025-09-17 16:39:11.820055 | orchestrator | 2025-09-17 16:39:11 | INFO  | Task 55e4d337-3490-4898-b69b-17a0df4986f2 (test) was prepared for execution.
2025-09-17 16:39:11.820117 | orchestrator | 2025-09-17 16:39:11 | INFO  | It takes a moment until task 55e4d337-3490-4898-b69b-17a0df4986f2 (test) has been started and output is visible here.
2025-09-17 16:45:09.483595 | orchestrator |
2025-09-17 16:45:09.483737 | orchestrator | PLAY [Create test project] *****************************************************
2025-09-17 16:45:09.483755 | orchestrator |
2025-09-17 16:45:09.483766 | orchestrator | TASK [Create test domain] ******************************************************
2025-09-17 16:45:09.483776 | orchestrator | Wednesday 17 September 2025 16:39:15 +0000 (0:00:00.073) 0:00:00.073 ***
2025-09-17 16:45:09.483787 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.483804 | orchestrator |
2025-09-17 16:45:09.483821 | orchestrator | TASK [Create test-admin user] **************************************************
2025-09-17 16:45:09.483836 | orchestrator | Wednesday 17 September 2025 16:39:18 +0000 (0:00:03.049) 0:00:03.123 ***
2025-09-17 16:45:09.483852 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.483869 | orchestrator |
2025-09-17 16:45:09.483885 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-09-17 16:45:09.483902 | orchestrator | Wednesday 17 September 2025 16:39:22 +0000 (0:00:03.700) 0:00:06.823 ***
2025-09-17 16:45:09.483918 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.483934 | orchestrator |
2025-09-17 16:45:09.483951 | orchestrator | TASK [Create test project] *****************************************************
2025-09-17 16:45:09.483967 | orchestrator | Wednesday 17 September 2025 16:39:28 +0000 (0:00:06.127) 0:00:12.950 ***
2025-09-17 16:45:09.483984 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.484000 | orchestrator |
2025-09-17 16:45:09.484017 | orchestrator | TASK [Create test user] ********************************************************
2025-09-17 16:45:09.484033 | orchestrator | Wednesday 17 September 2025 16:39:32 +0000 (0:00:03.909) 0:00:16.860 ***
2025-09-17 16:45:09.484049 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.484066 | orchestrator |
2025-09-17 16:45:09.484084 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-09-17 16:45:09.484100 | orchestrator | Wednesday 17 September 2025 16:39:36 +0000 (0:00:04.037) 0:00:20.898 ***
2025-09-17 16:45:09.484116 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-09-17 16:45:09.484132 | orchestrator | changed: [localhost] => (item=member)
2025-09-17 16:45:09.484150 | orchestrator | changed: [localhost] => (item=creator)
2025-09-17 16:45:09.484168 | orchestrator |
2025-09-17 16:45:09.484186 | orchestrator | TASK [Create test server group] ************************************************
2025-09-17 16:45:09.484204 | orchestrator | Wednesday 17 September 2025 16:39:47 +0000 (0:00:11.677) 0:00:32.575 ***
2025-09-17 16:45:09.484220 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.484297 | orchestrator |
2025-09-17 16:45:09.484317 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-09-17 16:45:09.484364 | orchestrator | Wednesday 17 September 2025 16:39:52 +0000 (0:00:04.700) 0:00:37.276 ***
2025-09-17 16:45:09.484385 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.484404 | orchestrator |
2025-09-17 16:45:09.484424 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-09-17 16:45:09.484442 | orchestrator | Wednesday 17 September 2025 16:39:58 +0000 (0:00:05.813) 0:00:43.089 ***
2025-09-17 16:45:09.484460 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.484479 | orchestrator |
2025-09-17 16:45:09.484498 | orchestrator | TASK [Create icmp security group] **********************************************
2025-09-17 16:45:09.484518 | orchestrator | Wednesday 17 September 2025 16:40:03 +0000 (0:00:04.679) 0:00:47.769 ***
2025-09-17 16:45:09.484535 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.484553 | orchestrator |
2025-09-17 16:45:09.484570 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-09-17 16:45:09.484588 | orchestrator | Wednesday 17 September 2025 16:40:06 +0000 (0:00:03.867) 0:00:51.636 ***
2025-09-17 16:45:09.484605 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.484623 | orchestrator |
2025-09-17 16:45:09.484639 | orchestrator | TASK [Create test keypair] *****************************************************
2025-09-17 16:45:09.484656 | orchestrator | Wednesday 17 September 2025 16:40:10 +0000 (0:00:03.607) 0:00:55.244 ***
2025-09-17 16:45:09.484707 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.484727 | orchestrator |
2025-09-17 16:45:09.484743 | orchestrator | TASK [Create test network topology] ********************************************
2025-09-17 16:45:09.484760 | orchestrator | Wednesday 17 September 2025 16:40:14 +0000 (0:00:03.633) 0:00:58.877 ***
2025-09-17 16:45:09.484777 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.484794 | orchestrator |
2025-09-17 16:45:09.484811 | orchestrator | TASK [Create test instances] ***************************************************
2025-09-17 16:45:09.484828 | orchestrator | Wednesday 17 September 2025 16:40:29 +0000 (0:00:15.601) 0:01:14.479 ***
2025-09-17 16:45:09.484845 | orchestrator | changed: [localhost] => (item=test)
2025-09-17 16:45:09.484861 | orchestrator | changed: [localhost] => (item=test-1)
2025-09-17 16:45:09.484876 | orchestrator | changed: [localhost] => (item=test-2)
2025-09-17 16:45:09.484893 | orchestrator |
2025-09-17 16:45:09.484910 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-09-17 16:45:09.484928 | orchestrator | changed: [localhost] => (item=test-3)
2025-09-17 16:45:09.484944 | orchestrator |
2025-09-17 16:45:09.484961 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-09-17 16:45:09.484971 | orchestrator |
2025-09-17 16:45:09.484981 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-09-17 16:45:09.484990 | orchestrator | changed: [localhost] => (item=test-4)
2025-09-17 16:45:09.485000 | orchestrator |
2025-09-17 16:45:09.485009 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-09-17 16:45:09.485019 | orchestrator | Wednesday 17 September 2025 16:43:48 +0000 (0:03:19.108) 0:04:33.588 ***
2025-09-17 16:45:09.485029 | orchestrator | changed: [localhost] => (item=test)
2025-09-17 16:45:09.485038 | orchestrator | changed: [localhost] => (item=test-1)
2025-09-17 16:45:09.485048 | orchestrator | changed: [localhost] => (item=test-2)
2025-09-17 16:45:09.485058 | orchestrator | changed: [localhost] => (item=test-3)
2025-09-17 16:45:09.485067 | orchestrator | changed: [localhost] => (item=test-4)
2025-09-17 16:45:09.485077 | orchestrator |
2025-09-17 16:45:09.485086 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-09-17 16:45:09.485096 | orchestrator | Wednesday 17 September 2025 16:44:11 +0000 (0:00:23.011) 0:04:56.600 ***
2025-09-17 16:45:09.485105 | orchestrator | changed: [localhost] => (item=test)
2025-09-17 16:45:09.485115 | orchestrator | changed: [localhost] => (item=test-1)
2025-09-17 16:45:09.485145 | orchestrator | changed: [localhost] => (item=test-2)
2025-09-17 16:45:09.485155 | orchestrator | changed: [localhost] => (item=test-3)
2025-09-17 16:45:09.485165 | orchestrator | changed: [localhost] => (item=test-4)
2025-09-17 16:45:09.485175 | orchestrator |
2025-09-17 16:45:09.485196 | orchestrator | TASK [Create test volume] ******************************************************
2025-09-17 16:45:09.485206 | orchestrator | Wednesday 17 September 2025 16:44:43 +0000 (0:00:32.089) 0:05:28.690 ***
2025-09-17 16:45:09.485216 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.485226 | orchestrator |
2025-09-17 16:45:09.485236 | orchestrator | TASK [Attach test volume] ******************************************************
2025-09-17 16:45:09.485245 | orchestrator | Wednesday 17 September 2025 16:44:50 +0000 (0:00:06.770) 0:05:35.460 ***
2025-09-17 16:45:09.485255 | orchestrator | changed: [localhost]
2025-09-17 16:45:09.485265 | orchestrator |
2025-09-17 16:45:09.485274 | orchestrator | TASK [Create floating ip address] **********************************************
2025-09-17 16:45:09.485284 | orchestrator | Wednesday 17 September 2025 16:45:04 +0000 (0:00:13.660) 0:05:49.121 ***
2025-09-17 16:45:09.485294 | orchestrator | ok: [localhost]
2025-09-17 16:45:09.485303 | orchestrator |
2025-09-17 16:45:09.485313 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-09-17 16:45:09.485323 | orchestrator | Wednesday 17 September 2025 16:45:09 +0000 (0:00:04.887) 0:05:54.008 ***
2025-09-17 16:45:09.485332 | orchestrator | ok: [localhost] => {
2025-09-17 16:45:09.485342 | orchestrator |     "msg": "192.168.112.136"
2025-09-17 16:45:09.485352 | orchestrator | }
2025-09-17 16:45:09.485362 | orchestrator |
2025-09-17 16:45:09.485371 | orchestrator | PLAY RECAP *********************************************************************
2025-09-17 16:45:09.485385 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-17 16:45:09.485396 | orchestrator |
2025-09-17 16:45:09.485406 | orchestrator |
2025-09-17 16:45:09.485415 | orchestrator | TASKS RECAP ********************************************************************
2025-09-17 16:45:09.485425 | orchestrator | Wednesday 17 September 2025 16:45:09 +0000 (0:00:00.036) 0:05:54.044 ***
2025-09-17 16:45:09.485434 | orchestrator | ===============================================================================
2025-09-17 16:45:09.485444 | orchestrator | Create test instances ------------------------------------------------- 199.11s
2025-09-17 16:45:09.485454 | orchestrator | Add tag to instances --------------------------------------------------- 32.09s
2025-09-17 16:45:09.485463 | orchestrator | Add metadata to instances ---------------------------------------------- 23.01s
2025-09-17 16:45:09.485473 | orchestrator | Create test network topology ------------------------------------------- 15.60s
2025-09-17 16:45:09.485483 | orchestrator | Attach test volume ----------------------------------------------------- 13.66s
2025-09-17 16:45:09.485492 | orchestrator | Add member roles to user test ------------------------------------------ 11.68s
2025-09-17 16:45:09.485502 | orchestrator | Create test volume ------------------------------------------------------ 6.77s
2025-09-17 16:45:09.485511 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.13s
2025-09-17 16:45:09.485521 | orchestrator | Create ssh security group ----------------------------------------------- 5.81s
2025-09-17 16:45:09.485531 | orchestrator | Create floating ip address ---------------------------------------------- 4.89s
2025-09-17 16:45:09.485540 | orchestrator | Create test server group ------------------------------------------------ 4.70s
2025-09-17 16:45:09.485550 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.68s
2025-09-17 16:45:09.485559 | orchestrator | Create test user -------------------------------------------------------- 4.04s
2025-09-17 16:45:09.485569 | orchestrator | Create test project ----------------------------------------------------- 3.91s
2025-09-17 16:45:09.485585 | orchestrator | Create icmp security group ---------------------------------------------- 3.87s
2025-09-17 16:45:09.485595 | orchestrator | Create test-admin user -------------------------------------------------- 3.70s
2025-09-17 16:45:09.485604 | orchestrator | Create test keypair ----------------------------------------------------- 3.63s
2025-09-17 16:45:09.485614 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.61s
2025-09-17 16:45:09.485623 | orchestrator | Create test domain ------------------------------------------------------ 3.05s
2025-09-17 16:45:09.485638 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s
2025-09-17 16:45:09.665462 | orchestrator | + server_list
2025-09-17 16:45:09.665544 | orchestrator | + openstack --os-cloud test server list
2025-09-17 16:45:13.156073 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-09-17 16:45:13.156163 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-09-17 16:45:13.156178 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-09-17 16:45:13.156190 | orchestrator | | b850c91e-51ae-40d0-9f5d-78d7cae403f4 | test-4 | ACTIVE | auto_allocated_network=10.42.0.33, 192.168.112.180 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-09-17 16:45:13.156201 | orchestrator | | 50668331-08af-4d90-88df-6f8d26fbd35a | test-3 | ACTIVE | auto_allocated_network=10.42.0.39, 192.168.112.132 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-09-17 16:45:13.156212 | orchestrator | | 17c0abfe-549c-4636-b65e-25baa66d7300 | test-2 | ACTIVE | auto_allocated_network=10.42.0.13, 192.168.112.199 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-09-17 16:45:13.156223 | orchestrator | | 4468d33f-094e-4460-b9bf-a041150689f2 | test-1 | ACTIVE | auto_allocated_network=10.42.0.51, 192.168.112.133 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-09-17 16:45:13.156234 | orchestrator | | 8d082c55-e0fc-4251-bb7b-1497fa6b27af | test | ACTIVE | auto_allocated_network=10.42.0.18, 192.168.112.136 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-09-17 16:45:13.156245 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-09-17 16:45:13.332974 | orchestrator | + openstack --os-cloud test server show test
2025-09-17 16:45:16.745233 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:16.745337 | orchestrator | | Field | Value |
2025-09-17 16:45:16.745353 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:16.745366 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-09-17 16:45:16.745378 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-09-17 16:45:16.745389 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-09-17 16:45:16.745420 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-09-17 16:45:16.745441 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-09-17 16:45:16.745453 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-09-17 16:45:16.745464 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-09-17 16:45:16.745475 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-09-17 16:45:16.745504 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-09-17 16:45:16.745516 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-09-17 16:45:16.745527 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-09-17 16:45:16.745538 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-09-17 16:45:16.745549 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-09-17 16:45:16.745560 | orchestrator | | OS-EXT-STS:task_state | None |
2025-09-17 16:45:16.745577 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-09-17 16:45:16.745593 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-17T16:41:00.000000 |
2025-09-17 16:45:16.745604 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-09-17 16:45:16.745615 | orchestrator | | accessIPv4 | |
2025-09-17 16:45:16.745626 | orchestrator | | accessIPv6 | |
2025-09-17 16:45:16.745637 | orchestrator | | addresses | auto_allocated_network=10.42.0.18, 192.168.112.136 |
2025-09-17 16:45:16.745654 | orchestrator | | config_drive | |
2025-09-17 16:45:16.745666 | orchestrator | | created | 2025-09-17T16:40:38Z |
2025-09-17 16:45:16.745677 | orchestrator | | description | None |
2025-09-17 16:45:16.745719 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-09-17 16:45:16.745730 | orchestrator | | hostId | 79379aedfb0e2fecb1f084c105b1d3d42f4ba727138ccaeddc5e59d4 |
2025-09-17 16:45:16.745748 | orchestrator | | host_status | None |
2025-09-17 16:45:16.745765 | orchestrator | | id | 8d082c55-e0fc-4251-bb7b-1497fa6b27af |
2025-09-17 16:45:16.745778 | orchestrator | | image | Cirros 0.6.2 (e4ff1db9-a4ab-4cfb-b4b5-1af4d0d06f77) |
2025-09-17 16:45:16.745791 | orchestrator | | key_name | test |
2025-09-17 16:45:16.745804 | orchestrator | | locked | False |
2025-09-17 16:45:16.745816 | orchestrator | | locked_reason | None |
2025-09-17 16:45:16.745828 | orchestrator | | name | test |
2025-09-17 16:45:16.745848 | orchestrator | | pinned_availability_zone | None |
2025-09-17 16:45:16.745861 | orchestrator | | progress | 0 |
2025-09-17 16:45:16.745873 | orchestrator | | project_id | 258bdfd398394f8ba256a5cf0403007e |
2025-09-17 16:45:16.745886 | orchestrator | | properties | hostname='test' |
2025-09-17 16:45:16.745905 | orchestrator | | security_groups | name='ssh' |
2025-09-17 16:45:16.745918 | orchestrator | | | name='icmp' |
2025-09-17 16:45:16.745934 | orchestrator | | server_groups | None |
2025-09-17 16:45:16.745947 | orchestrator | | status | ACTIVE |
2025-09-17 16:45:16.745959 | orchestrator | | tags | test |
2025-09-17 16:45:16.745971 | orchestrator | | trusted_image_certificates | None |
2025-09-17 16:45:16.745983 | orchestrator | | updated | 2025-09-17T16:43:53Z |
2025-09-17 16:45:16.746001 | orchestrator | | user_id | a679aefbbf17429184369072540da654 |
2025-09-17 16:45:16.746014 | orchestrator | | volumes_attached | delete_on_termination='False', id='9c59fd69-d1cb-4774-80da-0857b40ec848' |
2025-09-17 16:45:16.750528 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:16.979792 | orchestrator | + openstack --os-cloud test server show test-1
2025-09-17 16:45:20.001092 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:20.001200 | orchestrator | | Field | Value |
2025-09-17 16:45:20.001215 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:20.001244 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-09-17 16:45:20.001255 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-09-17 16:45:20.001266 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-09-17 16:45:20.001277 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-09-17 16:45:20.001288 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-09-17 16:45:20.001300 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-09-17 16:45:20.001310 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-09-17 16:45:20.001321 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-09-17 16:45:20.001372 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-09-17 16:45:20.001385 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-09-17 16:45:20.001396 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-09-17 16:45:20.001407 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-09-17 16:45:20.001419 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-09-17 16:45:20.001430 | orchestrator | | OS-EXT-STS:task_state | None |
2025-09-17 16:45:20.001449 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-09-17 16:45:20.001460 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-17T16:41:43.000000 |
2025-09-17 16:45:20.001471 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-09-17 16:45:20.001482 | orchestrator | | accessIPv4 | |
2025-09-17 16:45:20.001493 | orchestrator | | accessIPv6 | |
2025-09-17 16:45:20.001511 | orchestrator | | addresses | auto_allocated_network=10.42.0.51, 192.168.112.133 |
2025-09-17 16:45:20.001529 | orchestrator | | config_drive | |
2025-09-17 16:45:20.001541 | orchestrator | | created | 2025-09-17T16:41:22Z |
2025-09-17 16:45:20.001552 | orchestrator | | description | None |
2025-09-17 16:45:20.001563 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-09-17 16:45:20.001579 | orchestrator | | hostId | 57a0991a16e78914392275b92aff6429335cc07443be1a8d5fd8a934 |
2025-09-17 16:45:20.001590 | orchestrator | | host_status | None |
2025-09-17 16:45:20.001602 | orchestrator | | id | 4468d33f-094e-4460-b9bf-a041150689f2 |
2025-09-17 16:45:20.001613 | orchestrator | | image | Cirros 0.6.2 (e4ff1db9-a4ab-4cfb-b4b5-1af4d0d06f77) |
2025-09-17 16:45:20.001624 | orchestrator | | key_name | test |
2025-09-17 16:45:20.001635 | orchestrator | | locked | False |
2025-09-17 16:45:20.001652 | orchestrator | | locked_reason | None |
2025-09-17 16:45:20.001663 | orchestrator | | name | test-1 |
2025-09-17 16:45:20.001680 | orchestrator | | pinned_availability_zone | None |
2025-09-17 16:45:20.001753 | orchestrator | | progress | 0 |
2025-09-17 16:45:20.001765 | orchestrator | | project_id | 258bdfd398394f8ba256a5cf0403007e |
2025-09-17 16:45:20.001776 | orchestrator | | properties | hostname='test-1' |
2025-09-17 16:45:20.001793 | orchestrator | | security_groups | name='ssh' |
2025-09-17 16:45:20.001804 | orchestrator | | | name='icmp' |
2025-09-17 16:45:20.001815 | orchestrator | | server_groups | None |
2025-09-17 16:45:20.001826 | orchestrator | | status | ACTIVE |
2025-09-17 16:45:20.001844 | orchestrator | | tags | test |
2025-09-17 16:45:20.001855 | orchestrator | | trusted_image_certificates | None |
2025-09-17 16:45:20.001866 | orchestrator | | updated | 2025-09-17T16:43:58Z |
2025-09-17 16:45:20.001883 | orchestrator | | user_id | a679aefbbf17429184369072540da654 |
2025-09-17 16:45:20.001895 | orchestrator | | volumes_attached | |
2025-09-17 16:45:20.006011 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:20.259813 | orchestrator | + openstack --os-cloud test server show test-2
2025-09-17 16:45:23.380836 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:23.380981 | orchestrator | | Field | Value |
2025-09-17 16:45:23.381001 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:23.381013 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-09-17 16:45:23.381024 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-09-17 16:45:23.381061 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-09-17 16:45:23.381073 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-09-17 16:45:23.381084 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-09-17 16:45:23.381095 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-09-17 16:45:23.381106 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-09-17 16:45:23.381118 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-09-17 16:45:23.381149 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-09-17 16:45:23.381161 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-09-17 16:45:23.381178 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-09-17 16:45:23.381189 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-09-17 16:45:23.381201 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-09-17 16:45:23.381220 | orchestrator | | OS-EXT-STS:task_state | None |
2025-09-17 16:45:23.381232 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-09-17 16:45:23.381242 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-17T16:42:23.000000 |
2025-09-17 16:45:23.381254 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-09-17 16:45:23.381265 | orchestrator | | accessIPv4 | |
2025-09-17 16:45:23.381276 | orchestrator | | accessIPv6 | |
2025-09-17 16:45:23.381287 | orchestrator | | addresses | auto_allocated_network=10.42.0.13, 192.168.112.199 |
2025-09-17 16:45:23.381305 | orchestrator | | config_drive | |
2025-09-17 16:45:23.381317 | orchestrator | | created | 2025-09-17T16:42:01Z |
2025-09-17 16:45:23.381328 | orchestrator | | description | None |
2025-09-17 16:45:23.381340 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-09-17 16:45:23.381357 | orchestrator | | hostId | 77cdda4039293762bf9b7803fdffbb05208ccbec24ac6968d8b401f7 |
2025-09-17 16:45:23.381369 | orchestrator | | host_status | None |
2025-09-17 16:45:23.381380 | orchestrator | | id | 17c0abfe-549c-4636-b65e-25baa66d7300 |
2025-09-17 16:45:23.381391 | orchestrator | | image | Cirros 0.6.2 (e4ff1db9-a4ab-4cfb-b4b5-1af4d0d06f77) |
2025-09-17 16:45:23.381435 | orchestrator | | key_name | test |
2025-09-17 16:45:23.381447 | orchestrator | | locked | False |
2025-09-17 16:45:23.381458 | orchestrator | | locked_reason | None |
2025-09-17 16:45:23.381469 | orchestrator | | name | test-2 |
2025-09-17 16:45:23.381495 | orchestrator | | pinned_availability_zone | None |
2025-09-17 16:45:23.381511 | orchestrator | | progress | 0 |
2025-09-17 16:45:23.381530 | orchestrator | | project_id | 258bdfd398394f8ba256a5cf0403007e |
2025-09-17 16:45:23.381541 | orchestrator | | properties | hostname='test-2' |
2025-09-17 16:45:23.381552 | orchestrator | | security_groups | name='ssh' |
2025-09-17 16:45:23.381563 | orchestrator | | | name='icmp' |
2025-09-17 16:45:23.381574 | orchestrator | | server_groups | None |
2025-09-17 16:45:23.381585 | orchestrator | | status | ACTIVE |
2025-09-17 16:45:23.381596 | orchestrator | | tags | test |
2025-09-17 16:45:23.381607 | orchestrator | | trusted_image_certificates | None |
2025-09-17 16:45:23.381618 | orchestrator | | updated | 2025-09-17T16:44:02Z |
2025-09-17 16:45:23.381634 | orchestrator | | user_id | a679aefbbf17429184369072540da654 |
2025-09-17 16:45:23.381646 | orchestrator | | volumes_attached | |
2025-09-17 16:45:23.385286 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:23.614330 | orchestrator | + openstack --os-cloud test server show test-3
2025-09-17 16:45:26.782686 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:26.782829 | orchestrator | | Field | Value |
2025-09-17 16:45:26.782846 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:26.782858 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-09-17 16:45:26.782869 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-09-17 16:45:26.782880 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-09-17 16:45:26.782891 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-09-17 16:45:26.782902 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-09-17 16:45:26.782915 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-09-17 16:45:26.782935 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-09-17 16:45:26.783001 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-09-17 16:45:26.783049 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-09-17 16:45:26.783072 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-09-17 16:45:26.783090 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-09-17 16:45:26.783108 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-09-17 16:45:26.783128 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-09-17 16:45:26.783149 | orchestrator | | OS-EXT-STS:task_state | None |
2025-09-17 16:45:26.783168 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-09-17 16:45:26.783189 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-17T16:42:55.000000 |
2025-09-17 16:45:26.783207 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-09-17 16:45:26.783226 | orchestrator | | accessIPv4 | |
2025-09-17 16:45:26.783256 | orchestrator | | accessIPv6 | |
2025-09-17 16:45:26.783275 | orchestrator | | addresses | auto_allocated_network=10.42.0.39, 192.168.112.132 |
2025-09-17 16:45:26.783296 | orchestrator | | config_drive | |
2025-09-17 16:45:26.783308 | orchestrator | | created | 2025-09-17T16:42:39Z |
2025-09-17 16:45:26.783318 | orchestrator | | description | None |
2025-09-17 16:45:26.783329 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-09-17 16:45:26.783340 | orchestrator | | hostId | 57a0991a16e78914392275b92aff6429335cc07443be1a8d5fd8a934 |
2025-09-17 16:45:26.783351 | orchestrator | | host_status | None |
2025-09-17 16:45:26.783362 | orchestrator | | id | 50668331-08af-4d90-88df-6f8d26fbd35a |
2025-09-17 16:45:26.783373 | orchestrator | | image | Cirros 0.6.2 (e4ff1db9-a4ab-4cfb-b4b5-1af4d0d06f77) |
2025-09-17 16:45:26.783390 | orchestrator | | key_name | test |
2025-09-17 16:45:26.783401 | orchestrator | | locked | False |
2025-09-17 16:45:26.783412 | orchestrator | | locked_reason | None |
2025-09-17 16:45:26.783428 | orchestrator | | name | test-3 |
2025-09-17 16:45:26.783446 | orchestrator | | pinned_availability_zone | None |
2025-09-17 16:45:26.783457 | orchestrator | | progress | 0 |
2025-09-17 16:45:26.783468 | orchestrator | | project_id | 258bdfd398394f8ba256a5cf0403007e |
2025-09-17 16:45:26.783479 | orchestrator | | properties | hostname='test-3' |
2025-09-17 16:45:26.783490 | orchestrator | | security_groups | name='ssh' |
2025-09-17 16:45:26.783501 | orchestrator | | | name='icmp' |
2025-09-17 16:45:26.783512 | orchestrator | | server_groups | None |
2025-09-17 16:45:26.783529 | orchestrator | | status | ACTIVE |
2025-09-17 16:45:26.783540 | orchestrator | | tags | test |
2025-09-17 16:45:26.783551 | orchestrator | | trusted_image_certificates | None |
2025-09-17 16:45:26.783567 | orchestrator | | updated | 2025-09-17T16:44:07Z |
2025-09-17 16:45:26.783585 | orchestrator | | user_id | a679aefbbf17429184369072540da654 |
2025-09-17 16:45:26.783597 | orchestrator | | volumes_attached | |
2025-09-17 16:45:26.787386 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:27.019453 | orchestrator | + openstack --os-cloud test server show test-4
2025-09-17 16:45:30.120954 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:30.121085 | orchestrator | | Field | Value |
2025-09-17 16:45:30.121102 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-09-17 16:45:30.121114 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-09-17 16:45:30.121148 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-09-17 16:45:30.121160 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-09-17 16:45:30.121171 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2025-09-17 16:45:30.121182 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-09-17 16:45:30.121194 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-09-17 16:45:30.121205 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-09-17 16:45:30.121216 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-09-17 16:45:30.121245 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-09-17 16:45:30.121274 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-09-17 16:45:30.121286 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-09-17 16:45:30.121304 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-09-17 16:45:30.121315 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-09-17 16:45:30.121326 | orchestrator | | OS-EXT-STS:task_state | None |
2025-09-17 16:45:30.121338 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-09-17 16:45:30.121349 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-17T16:43:33.000000 |
2025-09-17 16:45:30.121365 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-09-17 16:45:30.121376 | orchestrator | | accessIPv4 | |
2025-09-17 16:45:30.121387 | orchestrator | | accessIPv6 | |
2025-09-17 16:45:30.121398 | orchestrator | | addresses | auto_allocated_network=10.42.0.33, 192.168.112.180 |
2025-09-17 16:45:30.121417 | orchestrator | | config_drive | |
2025-09-17 16:45:30.121428 | orchestrator | | created | 2025-09-17T16:43:16Z |
2025-09-17 16:45:30.121446 | orchestrator | | description | None |
2025-09-17 16:45:30.121457 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-09-17 16:45:30.121468 | orchestrator | | hostId | 77cdda4039293762bf9b7803fdffbb05208ccbec24ac6968d8b401f7 |
2025-09-17 16:45:30.121479 | orchestrator | | host_status | None |
2025-09-17 16:45:30.121490 | orchestrator | | id | b850c91e-51ae-40d0-9f5d-78d7cae403f4 |
2025-09-17 16:45:30.121507 | orchestrator | | image | Cirros 0.6.2 (e4ff1db9-a4ab-4cfb-b4b5-1af4d0d06f77) |
2025-09-17 16:45:30.121532 | orchestrator | | key_name | test |
2025-09-17 16:45:30.121551 | orchestrator | | locked | False |
2025-09-17 16:45:30.121569 | orchestrator | | locked_reason | None |
2025-09-17 16:45:30.121586 | orchestrator | | name | test-4 |
2025-09-17 16:45:30.121612 | orchestrator | | pinned_availability_zone | None |
2025-09-17 16:45:30.121643 | orchestrator | | progress | 0 |
2025-09-17 16:45:30.121662 | orchestrator | | project_id | 258bdfd398394f8ba256a5cf0403007e |
2025-09-17 16:45:30.121681 | orchestrator | | properties | hostname='test-4' |
2025-09-17
16:45:30.121748 | orchestrator | | security_groups | name='ssh' | 2025-09-17 16:45:30.121761 | orchestrator | | | name='icmp' | 2025-09-17 16:45:30.121772 | orchestrator | | server_groups | None | 2025-09-17 16:45:30.121783 | orchestrator | | status | ACTIVE | 2025-09-17 16:45:30.121801 | orchestrator | | tags | test | 2025-09-17 16:45:30.121812 | orchestrator | | trusted_image_certificates | None | 2025-09-17 16:45:30.121823 | orchestrator | | updated | 2025-09-17T16:44:11Z | 2025-09-17 16:45:30.121842 | orchestrator | | user_id | a679aefbbf17429184369072540da654 | 2025-09-17 16:45:30.121861 | orchestrator | | volumes_attached | | 2025-09-17 16:45:30.125355 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-17 16:45:30.361897 | orchestrator | + server_ping 2025-09-17 16:45:30.362947 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-17 16:45:30.362997 | orchestrator | ++ tr -d '\r' 2025-09-17 16:45:33.195346 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-17 16:45:33.195441 | orchestrator | + ping -c3 192.168.112.133 2025-09-17 16:45:33.209376 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 
2025-09-17 16:45:33.209436 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=6.98 ms
2025-09-17 16:45:34.205322 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.28 ms
2025-09-17 16:45:35.206497 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.94 ms
2025-09-17 16:45:35.206618 | orchestrator |
2025-09-17 16:45:35.206643 | orchestrator | --- 192.168.112.133 ping statistics ---
2025-09-17 16:45:35.206663 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-17 16:45:35.206682 | orchestrator | rtt min/avg/max/mdev = 1.940/3.735/6.984/2.301 ms
2025-09-17 16:45:35.207653 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-17 16:45:35.207729 | orchestrator | + ping -c3 192.168.112.199
2025-09-17 16:45:35.218128 | orchestrator | PING 192.168.112.199 (192.168.112.199) 56(84) bytes of data.
2025-09-17 16:45:35.218170 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=1 ttl=63 time=6.89 ms
2025-09-17 16:45:36.215066 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=2 ttl=63 time=2.39 ms
2025-09-17 16:45:37.216801 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=3 ttl=63 time=1.95 ms
2025-09-17 16:45:37.216902 | orchestrator |
2025-09-17 16:45:37.216918 | orchestrator | --- 192.168.112.199 ping statistics ---
2025-09-17 16:45:37.216931 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-17 16:45:37.216943 | orchestrator | rtt min/avg/max/mdev = 1.952/3.742/6.888/2.231 ms
2025-09-17 16:45:37.217297 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-17 16:45:37.217323 | orchestrator | + ping -c3 192.168.112.180
2025-09-17 16:45:37.229641 | orchestrator | PING 192.168.112.180 (192.168.112.180) 56(84) bytes of data.
2025-09-17 16:45:37.229697 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=1 ttl=63 time=8.82 ms
2025-09-17 16:45:38.226102 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=2 ttl=63 time=2.80 ms
2025-09-17 16:45:39.227671 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=3 ttl=63 time=1.89 ms
2025-09-17 16:45:39.227807 | orchestrator |
2025-09-17 16:45:39.227824 | orchestrator | --- 192.168.112.180 ping statistics ---
2025-09-17 16:45:39.227836 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-17 16:45:39.227848 | orchestrator | rtt min/avg/max/mdev = 1.886/4.501/8.821/3.077 ms
2025-09-17 16:45:39.227860 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-17 16:45:39.227871 | orchestrator | + ping -c3 192.168.112.136
2025-09-17 16:45:39.237938 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data.
2025-09-17 16:45:39.237974 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=6.26 ms
2025-09-17 16:45:40.235504 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=1.98 ms
2025-09-17 16:45:41.236264 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=1.57 ms
2025-09-17 16:45:41.236367 | orchestrator |
2025-09-17 16:45:41.236383 | orchestrator | --- 192.168.112.136 ping statistics ---
2025-09-17 16:45:41.236396 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-17 16:45:41.236408 | orchestrator | rtt min/avg/max/mdev = 1.572/3.270/6.264/2.123 ms
2025-09-17 16:45:41.237530 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-17 16:45:41.237554 | orchestrator | + ping -c3 192.168.112.132
2025-09-17 16:45:41.248901 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data.
2025-09-17 16:45:41.248953 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=7.40 ms
2025-09-17 16:45:42.246364 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.82 ms
2025-09-17 16:45:43.248105 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.81 ms
2025-09-17 16:45:43.249043 | orchestrator |
2025-09-17 16:45:43.249100 | orchestrator | --- 192.168.112.132 ping statistics ---
2025-09-17 16:45:43.249121 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-17 16:45:43.249140 | orchestrator | rtt min/avg/max/mdev = 1.805/4.005/7.395/2.432 ms
2025-09-17 16:45:43.249159 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]]
2025-09-17 16:45:43.381219 | orchestrator | ok: Runtime: 0:12:24.107771
2025-09-17 16:45:43.430340 |
2025-09-17 16:45:43.430457 | TASK [Run tempest]
2025-09-17 16:45:43.965608 | orchestrator | skipping: Conditional result was False
2025-09-17 16:45:43.988487 |
2025-09-17 16:45:43.988728 | TASK [Check prometheus alert status]
2025-09-17 16:45:44.529338 | orchestrator | skipping: Conditional result was False
2025-09-17 16:45:44.532468 |
2025-09-17 16:45:44.532649 | PLAY RECAP
2025-09-17 16:45:44.532812 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-09-17 16:45:44.532883 |
2025-09-17 16:45:44.743722 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-17 16:45:44.746402 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-17 16:45:45.486130 |
2025-09-17 16:45:45.486403 | PLAY [Post output play]
2025-09-17 16:45:45.503013 |
2025-09-17 16:45:45.503161 | LOOP [stage-output : Register sources]
2025-09-17 16:45:45.573336 |
2025-09-17 16:45:45.573659 | TASK [stage-output : Check sudo]
2025-09-17 16:45:46.380856 | orchestrator | sudo: a password is required
2025-09-17 16:45:46.610213 | orchestrator | ok: Runtime: 0:00:00.012611
2025-09-17 16:45:46.617382 |
2025-09-17 16:45:46.617499 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-17 16:45:46.647984 |
2025-09-17 16:45:46.648168 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-17 16:45:46.715494 | orchestrator | ok
2025-09-17 16:45:46.724220 |
2025-09-17 16:45:46.724394 | LOOP [stage-output : Ensure target folders exist]
2025-09-17 16:45:47.154988 | orchestrator | ok: "docs"
2025-09-17 16:45:47.155303 |
2025-09-17 16:45:47.398727 | orchestrator | ok: "artifacts"
2025-09-17 16:45:47.647318 | orchestrator | ok: "logs"
2025-09-17 16:45:47.670290 |
2025-09-17 16:45:47.670459 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-17 16:45:47.705886 |
2025-09-17 16:45:47.706137 | TASK [stage-output : Make all log files readable]
2025-09-17 16:45:47.968317 | orchestrator | ok
2025-09-17 16:45:47.974533 |
2025-09-17 16:45:47.974636 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-17 16:45:48.008767 | orchestrator | skipping: Conditional result was False
2025-09-17 16:45:48.025177 |
2025-09-17 16:45:48.025341 | TASK [stage-output : Discover log files for compression]
2025-09-17 16:45:48.049745 | orchestrator | skipping: Conditional result was False
2025-09-17 16:45:48.065066 |
2025-09-17 16:45:48.065210 | LOOP [stage-output : Archive everything from logs]
2025-09-17 16:45:48.110596 |
2025-09-17 16:45:48.110763 | PLAY [Post cleanup play]
2025-09-17 16:45:48.119475 |
2025-09-17 16:45:48.119582 | TASK [Set cloud fact (Zuul deployment)]
2025-09-17 16:45:48.169822 | orchestrator | ok
2025-09-17 16:45:48.178329 |
2025-09-17 16:45:48.178442 | TASK [Set cloud fact (local deployment)]
2025-09-17 16:45:48.201968 | orchestrator | skipping: Conditional result was False
2025-09-17 16:45:48.210676 |
2025-09-17 16:45:48.210797 | TASK [Clean the cloud environment]
2025-09-17 16:45:48.778644 | orchestrator | 2025-09-17 16:45:48 - clean up servers
2025-09-17 16:45:49.714772 | orchestrator | 2025-09-17 16:45:49 - testbed-manager
2025-09-17 16:45:49.801542 | orchestrator | 2025-09-17 16:45:49 - testbed-node-1
2025-09-17 16:45:49.897248 | orchestrator | 2025-09-17 16:45:49 - testbed-node-2
2025-09-17 16:45:49.980984 | orchestrator | 2025-09-17 16:45:49 - testbed-node-0
2025-09-17 16:45:50.074170 | orchestrator | 2025-09-17 16:45:50 - testbed-node-5
2025-09-17 16:45:50.168303 | orchestrator | 2025-09-17 16:45:50 - testbed-node-3
2025-09-17 16:45:50.260954 | orchestrator | 2025-09-17 16:45:50 - testbed-node-4
2025-09-17 16:45:50.367808 | orchestrator | 2025-09-17 16:45:50 - clean up keypairs
2025-09-17 16:45:50.386662 | orchestrator | 2025-09-17 16:45:50 - testbed
2025-09-17 16:45:50.419097 | orchestrator | 2025-09-17 16:45:50 - wait for servers to be gone
2025-09-17 16:45:59.252134 | orchestrator | 2025-09-17 16:45:59 - clean up ports
2025-09-17 16:45:59.442850 | orchestrator | 2025-09-17 16:45:59 - 13afbc81-da52-4c80-a43f-c80a15d76375
2025-09-17 16:45:59.740841 | orchestrator | 2025-09-17 16:45:59 - 14623353-a3fa-419a-af86-bf80f2989f3a
2025-09-17 16:46:00.043578 | orchestrator | 2025-09-17 16:46:00 - 1e9dea3a-51cc-4526-93d3-5f916798e220
2025-09-17 16:46:00.812901 | orchestrator | 2025-09-17 16:46:00 - 76c70a21-f7ad-4206-830d-0bf974997a6b
2025-09-17 16:46:01.060211 | orchestrator | 2025-09-17 16:46:01 - 79506143-bacf-4bfa-9712-50fe652bc475
2025-09-17 16:46:01.298014 | orchestrator | 2025-09-17 16:46:01 - b6fbf860-0639-4376-a03f-e543988c1c4f
2025-09-17 16:46:01.565871 | orchestrator | 2025-09-17 16:46:01 - cde2ea7e-fe52-4910-90e8-b36d87f422ac
2025-09-17 16:46:01.990449 | orchestrator | 2025-09-17 16:46:01 - clean up volumes
2025-09-17 16:46:02.097442 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-manager-base
2025-09-17 16:46:02.135486 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-0-node-base
2025-09-17 16:46:02.171557 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-5-node-base
2025-09-17 16:46:02.206653 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-4-node-base
2025-09-17 16:46:02.252277 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-1-node-base
2025-09-17 16:46:02.292686 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-2-node-base
2025-09-17 16:46:02.330532 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-3-node-base
2025-09-17 16:46:02.373799 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-1-node-4
2025-09-17 16:46:02.411782 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-6-node-3
2025-09-17 16:46:02.455591 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-3-node-3
2025-09-17 16:46:02.495259 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-0-node-3
2025-09-17 16:46:02.536776 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-2-node-5
2025-09-17 16:46:02.578828 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-7-node-4
2025-09-17 16:46:02.624594 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-8-node-5
2025-09-17 16:46:02.666678 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-5-node-5
2025-09-17 16:46:02.711966 | orchestrator | 2025-09-17 16:46:02 - testbed-volume-4-node-4
2025-09-17 16:46:02.756481 | orchestrator | 2025-09-17 16:46:02 - disconnect routers
2025-09-17 16:46:02.874262 | orchestrator | 2025-09-17 16:46:02 - testbed
2025-09-17 16:46:03.974835 | orchestrator | 2025-09-17 16:46:03 - clean up subnets
2025-09-17 16:46:04.031400 | orchestrator | 2025-09-17 16:46:04 - subnet-testbed-management
2025-09-17 16:46:04.209916 | orchestrator | 2025-09-17 16:46:04 - clean up networks
2025-09-17 16:46:04.391061 | orchestrator | 2025-09-17 16:46:04 - net-testbed-management
2025-09-17 16:46:04.657818 | orchestrator | 2025-09-17 16:46:04 - clean up security groups
2025-09-17 16:46:04.699484 | orchestrator | 2025-09-17 16:46:04 - testbed-node
2025-09-17 16:46:04.826575 | orchestrator | 2025-09-17 16:46:04 - testbed-management
2025-09-17 16:46:04.931671 | orchestrator | 2025-09-17 16:46:04 - clean up floating ips
2025-09-17 16:46:04.970752 | orchestrator | 2025-09-17 16:46:04 - 81.163.192.205
2025-09-17 16:46:05.322545 | orchestrator | 2025-09-17 16:46:05 - clean up routers
2025-09-17 16:46:05.434102 | orchestrator | 2025-09-17 16:46:05 - testbed
2025-09-17 16:46:06.794652 | orchestrator | ok: Runtime: 0:00:18.247779
2025-09-17 16:46:06.798916 |
2025-09-17 16:46:06.799088 | PLAY RECAP
2025-09-17 16:46:06.799251 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-17 16:46:06.799321 |
2025-09-17 16:46:06.932449 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-17 16:46:06.934971 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-17 16:46:07.705521 |
2025-09-17 16:46:07.705676 | PLAY [Cleanup play]
2025-09-17 16:46:07.721943 |
2025-09-17 16:46:07.722069 | TASK [Set cloud fact (Zuul deployment)]
2025-09-17 16:46:07.777514 | orchestrator | ok
2025-09-17 16:46:07.785622 |
2025-09-17 16:46:07.785748 | TASK [Set cloud fact (local deployment)]
2025-09-17 16:46:07.820027 | orchestrator | skipping: Conditional result was False
2025-09-17 16:46:07.827939 |
2025-09-17 16:46:07.828041 | TASK [Clean the cloud environment]
2025-09-17 16:46:08.909629 | orchestrator | 2025-09-17 16:46:08 - clean up servers
2025-09-17 16:46:09.470117 | orchestrator | 2025-09-17 16:46:09 - clean up keypairs
2025-09-17 16:46:09.487082 | orchestrator | 2025-09-17 16:46:09 - wait for servers to be gone
2025-09-17 16:46:09.526138 | orchestrator | 2025-09-17 16:46:09 - clean up ports
2025-09-17 16:46:09.602645 | orchestrator | 2025-09-17 16:46:09 - clean up volumes
2025-09-17 16:46:09.663107 | orchestrator | 2025-09-17 16:46:09 - disconnect routers
2025-09-17 16:46:09.693373 | orchestrator | 2025-09-17 16:46:09 - clean up subnets
2025-09-17 16:46:09.712712 | orchestrator | 2025-09-17 16:46:09 - clean up networks
2025-09-17 16:46:09.883437 | orchestrator | 2025-09-17 16:46:09 - clean up security groups
2025-09-17 16:46:09.921150 | orchestrator | 2025-09-17 16:46:09 - clean up floating ips
2025-09-17 16:46:09.953326 | orchestrator | 2025-09-17 16:46:09 - clean up routers
2025-09-17 16:46:10.366622 | orchestrator | ok: Runtime: 0:00:01.412122
2025-09-17 16:46:10.370673 |
2025-09-17 16:46:10.370935 | PLAY RECAP
2025-09-17 16:46:10.371090 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-17 16:46:10.371162 |
2025-09-17 16:46:10.494404 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-17 16:46:10.495484 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-17 16:46:11.223896 |
2025-09-17 16:46:11.224050 | PLAY [Base post-fetch]
2025-09-17 16:46:11.239163 |
2025-09-17 16:46:11.239304 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-17 16:46:11.296076 | orchestrator | skipping: Conditional result was False
2025-09-17 16:46:11.309604 |
2025-09-17 16:46:11.309791 | TASK [fetch-output : Set log path for single node]
2025-09-17 16:46:11.359102 | orchestrator | ok
2025-09-17 16:46:11.368074 |
2025-09-17 16:46:11.368210 | LOOP [fetch-output : Ensure local output dirs]
2025-09-17 16:46:11.825422 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/8ca14ae0db454334a6aae0a72c6275fe/work/logs"
2025-09-17 16:46:12.093791 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8ca14ae0db454334a6aae0a72c6275fe/work/artifacts"
2025-09-17 16:46:12.362617 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8ca14ae0db454334a6aae0a72c6275fe/work/docs"
2025-09-17 16:46:12.387185 |
2025-09-17 16:46:12.387484 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-17 16:46:13.297346 | orchestrator | changed: .d..t...... ./
2025-09-17 16:46:13.297655 | orchestrator | changed: All items complete
2025-09-17 16:46:13.297702 |
2025-09-17 16:46:14.001418 | orchestrator | changed: .d..t...... ./
2025-09-17 16:46:14.730964 | orchestrator | changed: .d..t...... ./
2025-09-17 16:46:14.754509 |
2025-09-17 16:46:14.754634 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-17 16:46:14.789000 | orchestrator | skipping: Conditional result was False
2025-09-17 16:46:14.792591 | orchestrator | skipping: Conditional result was False
2025-09-17 16:46:14.813390 |
2025-09-17 16:46:14.813500 | PLAY RECAP
2025-09-17 16:46:14.813581 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-17 16:46:14.813624 |
2025-09-17 16:46:14.931540 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-17 16:46:14.933417 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-17 16:46:15.678213 |
2025-09-17 16:46:15.678420 | PLAY [Base post]
2025-09-17 16:46:15.693217 |
2025-09-17 16:46:15.693383 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-17 16:46:16.601974 | orchestrator | changed
2025-09-17 16:46:16.611558 |
2025-09-17 16:46:16.611696 | PLAY RECAP
2025-09-17 16:46:16.611766 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-17 16:46:16.611847 |
2025-09-17 16:46:16.732195 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-17 16:46:16.733249 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-17 16:46:17.524849 |
2025-09-17 16:46:17.525029 | PLAY [Base post-logs]
2025-09-17 16:46:17.536102 |
2025-09-17 16:46:17.536301 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-17 16:46:17.999295 | localhost | changed
2025-09-17 16:46:18.009482 |
2025-09-17 16:46:18.009650 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-17 16:46:18.046216 | localhost | ok
2025-09-17 16:46:18.050211 |
2025-09-17 16:46:18.050388 | TASK [Set zuul-log-path fact]
2025-09-17 16:46:18.078194 | localhost | ok
2025-09-17 16:46:18.092671 |
2025-09-17 16:46:18.092829 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-17 16:46:18.131493 | localhost | ok
2025-09-17 16:46:18.138556 |
2025-09-17 16:46:18.138744 | TASK [upload-logs : Create log directories]
2025-09-17 16:46:18.648648 | localhost | changed
2025-09-17 16:46:18.653878 |
2025-09-17 16:46:18.654042 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-17 16:46:19.157184 | localhost -> localhost | ok: Runtime: 0:00:00.007069
2025-09-17 16:46:19.165958 |
2025-09-17 16:46:19.166167 | TASK [upload-logs : Upload logs to log server]
2025-09-17 16:46:19.721599 | localhost | Output suppressed because no_log was given
2025-09-17 16:46:19.724163 |
2025-09-17 16:46:19.724317 | LOOP [upload-logs : Compress console log and json output]
2025-09-17 16:46:19.782463 | localhost | skipping: Conditional result was False
2025-09-17 16:46:19.787618 | localhost | skipping: Conditional result was False
2025-09-17 16:46:19.794495 |
2025-09-17 16:46:19.794672 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-17 16:46:19.851553 | localhost | skipping: Conditional result was False
2025-09-17 16:46:19.852209 |
2025-09-17 16:46:19.855597 | localhost | skipping: Conditional result was False
2025-09-17 16:46:19.868275 |
2025-09-17 16:46:19.868448 | LOOP [upload-logs : Upload console log and json output]